Comparative Analysis of SDG Implementation Evolution Worldwide

Authors

Lodrik Adam, Sofia Benczédi, Stefan Favre, Delia Fuchs

Published

December 11, 2023

1 Introduction

1.1 Overview and Motivation

The global significance of the SDGs is the basis of our work. The adoption of the SDGs by the United Nations in 2015 marked a significant global commitment to address pressing issues such as poverty, inequality, climate change, and more. The fact that these goals were unanimously adopted by 193 member states underscores their importance. This prompted us to ask ourselves: can we evaluate the progress? What has really been done so far? Although the SDGs have attracted considerable attention and backing, it is essential to evaluate the events preceding and following their implementation. Understanding the actions taken and the progress made is essential in determining whether these global commitments are resulting in tangible improvements to individuals’ lives. By examining the evolution of all countries and their respective contributions towards achieving the SDGs, we can develop a comprehensive understanding of collective efforts and identify potential disparities or gaps.

1.3 Research questions

  1. Focus on factors: What can explain the state of the countries regarding sustainable development? (We will analyse different factors: scores from the Human Freedom Index, GDP per capita, military expenditures in % of GDP/government expenditure, unemployment rate, and internet usage.) See the data description for more precise information about these factors.

  2. Focus on time: How has the adoption of the SDGs in 2015 influenced the achievement of the SDGs? (We want to compare the SDG scores of the different countries before and after 2015; scores are calculated even for years before the adoption. This lets us see whether the adoption of the SDGs gave a real “push” to sustainable development.)

  3. Focus on events: Is the evolution of sustainable development influenced by uncontrollable events, such as economic crises, health crises and natural disasters? (We will analyse the impact of COVID-19, natural disasters and conflicts (number of deaths, damages, etc.) on the SDG scores.) See the data description for more precise information about how the impact of these events is materialized in the data.

  4. Focus on relationships between SDGs: How are the different SDGs linked? (We want to see whether some SDGs are linked in the sense that a high score on one implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)
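To make the fourth question concrete, here is a minimal sketch (on invented goal scores for hypothetical countries, not our dataset) of how a correlation matrix can expose SDGs that move together; the columns goal1, goal3 and goal13 are purely illustrative:

```r
# Invented goal scores for 50 hypothetical countries (NOT our dataset)
set.seed(1)
toy <- data.frame(goal1 = runif(50, 40, 90))
toy$goal3  <- toy$goal1 + rnorm(50, sd = 5)  # built to follow goal1
toy$goal13 <- runif(50, 40, 90)              # built to be independent

# Pairwise correlations between goal scores: high values hint at SDGs
# that could be grouped together
cor_matrix <- cor(toy, use = "pairwise.complete.obs")
round(cor_matrix, 2)
```

On the real data, the same call on the goal score columns would be a starting point for grouping the goals.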

2 Data

2.1 Sources

We are collecting our data from the Sustainable Development Report (SDG), the International Labour Organization (ILOSTAT), the World Bank, Our World in Data, the Cato Institute, one dataset from Kaggle (disasters: we couldn’t find relevant accessible information elsewhere) and GitHub. We found different datasets containing useful information in relation to the SDGs. The details about these data and the links are presented in the next section. Utilizing the kableExtra package, we provide a comprehensive list of our sources with corresponding links, as outlined below:

Name of the Table Source
D1_1_SDG dashboards.sdgindex.org
D2_2_Unemployment_rate ilo.org
D3_0_GDP_per_capita data.worldbank.org
D3_1_Military_expenditure_percent_GDP data.worldbank.org
D3_2_Military_expenditure_percent_gov_exp data.worldbank.org
D4_0_Internet_usage ourworldindata.org
D5_0_Human_freedom_index cato.org
D6_0_Disasters kaggle.com
D7_0_COVID github.com
D8_0_Conflicts datacatalog.worldbank.org

2.2 Description

During the wrangling process, we added data to our main table (D1_1_SDG) from the other datasets, matching them on the country code and the year. The tables below show all the variables present in our 9 databases. We will then merge them to obtain our final table for the analysis.
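The matching step can be sketched as follows (toy tables with invented values; on the real data, the pre-cleaned datasets are joined the same way):

```r
library(dplyr)

# Toy main table and toy unemployment table (values are invented)
sdg <- data.frame(code = c("CHE", "CHE", "FRA"),
                  year = c(2014, 2015, 2015),
                  overallscore = c(78.1, 78.6, 77.0))
unemp <- data.frame(code = c("CHE", "FRA"),
                    year = c(2015, 2015),
                    unemployment.rate = c(0.045, 0.101))

# Every row of the main table is kept; the unemployment rate is attached
# where the (code, year) pair matches, NA otherwise
merged <- left_join(sdg, unemp, by = c("code", "year"))
merged
```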

2.2.1 Our databases

Sustainable Development Goals database (D1_1_SDG)

The Sustainable Development Goals (SDGs) are a universal set of 17 interlinked goals that were adopted by the United Nations in 2015 as part of the 2030 Agenda for Sustainable Development. These goals provide a shared blueprint for peace and prosperity for people and the planet, now and into the future.

Our primary database focuses on the Sustainable Development Goals (SDG). Below is a table summarizing the key variables included:

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
overallscore Overall score on all 17 SDGs (the scores are percentages of goal achievement, determined by the UN based on several indicators)
goal1:goal17 Score on each SDG except SDG 14 (16 variables)
population Population of the country

Unemployment rate database (D2_2_Unemployment_rate)

This database gives us comprehensive data on the unemployment rate of each country from 2000 to 2022. Originally, it included categories based on various age groups. However, for simplicity and coherence, the database has been streamlined to focus exclusively on the unemployment rates of individuals aged 15 years and older.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
unemployment.rate Unemployment rate (% of the population 15 years old and older)

GDP per capita database (D3_0_GDP_per_capita)

This database offers detailed information on the GDP per capita in dollars for various countries, covering the period from 2000 to 2022. It is designed to provide insights into the economic performance of each country over these years, measured through the lens of per capita GDP.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
GDPpercapita GDP per capita

Proportion of the GDP dedicated to Military expenditures database (D3_1_Military_expenditure_percent_GDP)

This database shows the proportion of GDP that each country has allocated to military expenditures. It covers the period from 2000 to 2022.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
MilitaryExpenditurePercentGDP Military expenditures in percentage of GDP

Internet usage database (D4_0_Internet_usage)

This database provides information on the percentage of the population that uses the internet in each country. It covers the period from 2000 to 2022.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
internet.usage Internet usage (% of the population)

Human freedom index database (D5_0_Human_freedom_index)

This database provides information on the Human Freedom Index (HFI) for each country. The HFI is a composite index that measures the degree to which people are free to enjoy important rights and freedoms. It covers the period from 2000 to 2020.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2020)
region Part of the world, group of countries (e.g. Eastern Europe, Sub-Saharan Africa, South Asia, etc.)
pf_law Rule of law, mean score of: Procedural justice, Civil justice, Criminal justice, Rule of law (V-Dem)
pf_security Security and safety, mean score of: Homicide, Disappearances, conflicts, terrorism
pf_movement Freedom of movement (V-Dem), Freedom of movement (CLD)
pf_religion Freedom of religion, Religious organization repression
pf_assembly Civil society entry and exit, Freedom of assembly, Freedom to form/run political parties, Civil society repression
pf_expression Direct attacks on the press, Media and expression (V-Dem), Media and expression (Freedom House), Media and expression (BTI), Media and expression (CLD)
pf_identity Same-sex relationships, Divorce, Inheritance rights, Female genital mutilation
ef_government Government consumption, Transfers and subsidies, Government investment, Top marginal tax rate, State ownership of assets
ef_legal Judicial independence, Impartial courts, Protection of property rights, Military interference, Integrity of the legal system, Legal enforcement of contracts, Regulatory costs, Reliability of police
ef_money Money growth, Standard deviation of inflation, Inflation: Most recent year, Freedom to own foreign currency
ef_trade Tariffs, Regulatory trade barriers, Black-market exchange rates, Movement of capital and people
ef_regulation Credit market regulations, Labor market regulations, Business regulations

Disaster list database (D6_0_Disasters)

This database provides information on the number of people killed, injured, affected and made homeless, as well as the total number of people affected and the total infrastructure damage caused by disasters in each country. It covers the period from 2000 to 2021.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2021)
continent Continent affected by the disasters (floods, hurricanes, etc.)
total_deaths Number of deaths caused by disasters
no_injured Number of people injured by disasters
no_affected Number of people affected by disasters
no_homeless Number of people made homeless by disasters
total_affected Total number of people affected by disasters
total_damages Total infrastructure damage ('000 US$)

COVID database (D7_0_COVID)

This database provides information on the number of COVID deaths per million people, the number of COVID cases per million people and the Government Response Stringency Index in each country. It covers the period from 2020 to 2022.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2020-2022)
deaths_per_million Number of COVID deaths per million people
cases_per_million Number of COVID cases per million people
stringency Government Response Stringency Index: composite measure based on 9 response indicators including school closures, workplace closures, and travel bans

Conflicts database (D8_0_Conflicts)

This database provides information on the number of deaths, the number of people affected and the maximum intensity of conflicts in each country. It covers the period from 2000 to 2022.

Variable Name Explanation
code Country code (ISO)
country Country name
year Year of the observation (2000-2022)
ongoing Variable coded 1 for more than 25 deaths in intrastate conflict and 0 otherwise according to UCDP/PRIO Armed Conflict Dataset 17.1.
sum_deaths Best estimate of deaths in all categories of violence (non-state, one-sided and state-based) recorded by the Uppsala Conflict Data Program in the country based on the UCDP GED dataset (unpublished 2016 data). The location of these events is used for estimating the extent of violence.
pop_affected Share of population affected by violence in percentage (0 to 100) measured as described above based on population data from CIESIN, the PRIO-GRID structure as well as UCDP GED.
area_affected Area affected by conflict
maxintensity Two different intensity levels are coded: minor armed conflicts (1) and wars (2), Takes the max intensity of conflict in the country so that it is coded 2 if there is at least one war (>=1000 deaths in intrastate conflict) during the year. Data from UCDP/PRIO Armed Conflict Dataset 17.1.

2.3 Wrangling/cleaning

2.3.1 Pre-cleaning

To accommodate the large scale of the datasets, we pre-cleaned each one prior to merging. This streamlined the process, simplifying the cleaning of the final, combined dataset. The treatment of missing values will be taken care of after merging our datasets.

2.3.1.1 Dataset on SDG

This is our main dataset, which we clean in order to keep the columns containing the following information: country name, country code, year, population, overall score, and the SDG scores.

We start by importing the data and converting it into a DataFrame. Next, we rename the columns and convert the scores into numeric variables.

Code
#### D1_0_SDG importation ####

D1_0_SDG <- read.csv(here("scripts","data","SDG.csv"), sep = ";")

D1_0_SDG <- D1_0_SDG[,1:22]

colnames(D1_0_SDG) <- c("code", "country", "year", "population",
                        "overallscore", "goal1", "goal2", "goal3",
                        "goal4", "goal5", "goal6", "goal7", "goal8",
                        "goal9", "goal10", "goal11", "goal12",
                        "goal13", "goal14", "goal15", "goal16",
                        "goal17")

D1_0_SDG[["overallscore"]] <- as.double(gsub(",", ".",
                                             D1_0_SDG[["overallscore"]]))

makenumSDG <- function(D1_0_SDG) {
  for (i in 1:17) {
    varname <- paste("goal", i, sep = "")
    D1_0_SDG[[varname]] <- as.double(gsub(",", ".",
                                          D1_0_SDG[[varname]]))
  }
  return(D1_0_SDG)
}

D1_0_SDG <- makenumSDG(D1_0_SDG)

We proceed by examining the missing values.

Code
#### D1_0_SDG missing values ####

propmissing <- sapply(D1_0_SDG, function(col) mean(is.na(col)))

prop_missing_data <- data.frame(variable = colnames(D1_0_SDG),
                                prop_missing = propmissing)

ggplot(prop_missing_data, aes(x = variable, y = prop_missing)) +
   geom_bar(stat = "identity", fill = "skyblue", color = "black") +
   labs(title = "NAs by columns in the main dataset",
        x = "Variable",
        y = "Proportion of Missing Values") +
   theme_minimal()+
   coord_flip()

Observing that the ‘population’ column contains numerous NAs, we investigate and discover that missing values are common, as some observations represent regions, not countries. Therefore, we can safely exclude these observations.

Code
#### D1_0_SDG missing values in population ####

SDG0 <- D1_0_SDG %>%
  group_by(code) %>%
  select(population) %>%
  summarize(NaPop = mean(is.na(population))) %>%
  filter(NaPop != 0)

ggplot(SDG0, aes(x = code, y = NaPop)) +
  geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
  labs(title = "NAs by row in population variable are for regions and not countries",
       x = "Code",
       y = "Proportion of Missing Values") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

D1_0_SDG <- D1_0_SDG %>%
  filter(!str_detect(code, "^_"))

Now, there are no missing values in the ‘population’ variable, and we observe that it contains information on 166 countries.

We notice that NAs are present in only three SDG scores: 1, 10, and 14. Additionally, when a country has NAs, they occur across all years or not at all. Consequently, we decide to conduct further investigations on these three SDG scores to determine whether to include them in our analysis.
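That all-or-nothing pattern can be verified with a per-country proportion of NAs, which should only ever equal 0 or 1. Here is a sketch on a toy table; the same group_by/summarize applies to the real goal columns:

```r
library(dplyr)

# Toy table: country AAA has goal1 for every year, BBB for none
toy <- data.frame(code  = rep(c("AAA", "BBB"), each = 3),
                  year  = rep(2000:2002, times = 2),
                  goal1 = c(50, 55, 60, NA, NA, NA))

na_share <- toy %>%
  group_by(code) %>%
  summarize(prop_na = mean(is.na(goal1)), .groups = "drop")

# TRUE exactly when, for every country, NAs cover all years or none
all(na_share$prop_na %in% c(0, 1))
```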

For goal 1, only 9.04% of the countries (15 countries) have missing values. Goal 1 being “End poverty”, we decide to keep it and only remove the countries with no information from the analysis.

Code
#### SDG2 missing values ####

SDG2 <- D1_0_SDG |> 
  group_by(code) |> 
  select(contains("goal")) |> 
  summarize(Na1 = mean(is.na(goal1))) |>
  filter(Na1 != 0)
country_number <- length(unique(D1_0_SDG$country))
length(unique(SDG2$code))/country_number
#> [1] 0.0904

For goal 10, only 10.2% of the countries (17 countries) have missing values. Goal 10 being “Reduced inequalities”, we decide to keep it and only remove the countries with no information from the analysis.

Code
#### SDG3 missing values ####

SDG3 <- D1_0_SDG |> 
  group_by(code) |> 
  select(contains("goal")) |> 
  summarize(Na10 = mean(is.na(goal10))) |>
  filter(Na10 != 0)

length(unique(SDG3$code))/country_number
#> [1] 0.102

For goal 14, 24.1% of the countries (40 countries) have missing values. Goal 14 being “Life below water”, we decide not to keep it, because other SDGs such as “Life on land” and “Clean water and sanitation” already cover similar subjects.

Code
#### SDG4 missing values ####

SDG4 <- D1_0_SDG |> 
  group_by(code) |> 
  select(contains("goal")) |> 
  summarize(Na14 = mean(is.na(goal14))) |>
  filter(Na14 != 0)

length(unique(SDG4$code))/country_number
#> [1] 0.241

D1_0_SDG <- D1_0_SDG %>% select(-goal14)

We will work with various datasets and merge them using the country code and year as key identifiers. To ensure accurate matching, we first verify that country names are encoded in UTF-8 format. Then, we standardize the names of the countries (requiring a custom match for Turkey) and the country codes, utilizing the countrycode library. Additionally, we compile a list of all country codes from the main database to filter the other datasets. Lastly, we complete the database to include all possible “country, year” combinations, ensuring the total number of rows remains unchanged.

Code
#### D1_0_SDG country code ####

D1_0_SDG$country <- stri_encode(D1_0_SDG$country, to = "UTF-8")

D1_0_SDG <- D1_0_SDG %>%
  mutate(country = countrycode(country, "country.name", "country.name",
                               custom_match = c("Türkiye" = "Turkey")))

D1_0_SDG$code <- countrycode(
  sourcevar = D1_0_SDG$code,
  origin = "iso3c",
  destination = "iso3c"
)

list_country <- c(unique(D1_0_SDG$code))

D1_0_SDG_country_list <- D1_0_SDG %>%
  filter(code %in% list_country) %>%
  select(code, country)

D1_0_SDG_country_list <- D1_0_SDG_country_list %>%
  select(code, country) %>%
  distinct()

Finally, we complete the database to ensure there are no missing pairs of (year, code).
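This completion step can be done with tidyr’s complete(), sketched here on a toy table where one (code, year) pair is absent:

```r
library(tidyr)

# Toy table missing the (BBB, 2001) combination
toy <- data.frame(code  = c("AAA", "AAA", "BBB"),
                  year  = c(2000, 2001, 2000),
                  score = c(70, 71, 65))

# complete() adds one NA-filled row for every absent (code, year) pair
full <- complete(toy, code, year)
full
```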

Here are the first few lines of the cleaned dataset on SDG achievement scores:

For this first dataset, we reduced the size from 4,140 observations across 120 variables to 3,818 observations for 21 variables.

As mentioned, this is now our main dataset. All subsequent datasets will be merged with it. Therefore, for each of the following datasets, we make sure to keep only data for the same countries and years. We have a total of 166 countries and the years range from 2000 to 2022.

2.3.1.2 Dataset on Unemployment rate

In this dataset, the initial step involves importing the data. Next, we ensure that the names and codes of the countries are formatted in UTF-8, preventing any discrepancies due to mismatches in country names. Following this, we modify the column names and filter the data to include only the relevant countries and years, specifically the years 2000 to 2022, covering 166 countries from our primary dataset.

Code
#### D2_1_Unemployment_rate pre-cleaning ####

D2_1_Unemployment_rate <-
  read.csv(here("scripts","data","UnemploymentRate.csv")) %>%
  mutate(
    country = iconv(ref_area.label, to = "UTF-8", sub = "byte"),
    country = countrycode(country, "country.name", "country.name"),
    year = time,
    `unemployment rate` = obs_value / 100,
    age_category = classif1.label,
    sex = sex.label
  ) %>%
  select(-ref_area.label, -time, -obs_value, -classif1.label,
         -sex.label, -source.label, -obs_status.label, -indicator.label) %>%
  merge(D1_0_SDG_country_list[, c("country", "code")],
        by = "country", all.x = TRUE) %>%
  filter(year >= 2000 & year <= 2022,
         !str_detect(sex, fixed("Male")) & !str_detect(sex, fixed("Female")),
         code %in% D1_0_SDG_country_list$code,
         age_category == "Age (Youth, adults): 15+") %>%
  select(code, country, year, `unemployment rate`) %>%
  distinct()

Here are the first few lines of the cleaned dataset on Unemployment rate:

For this dataset, we reduced the size from 82,800 observations across 8 variables to 3,812 observations for 4 variables.

2.3.1.3 Datasets on GDP and military expenditures

We have three different databases, each containing information on every country over the years, with each year represented as one variable (wide format). We want to extract three variables for our analysis: GDP per capita, military expenditures as a percentage of GDP, and military expenditures as a percentage of government expenditures.

Code
#### GDP per capita pre-cleaning ####

GDPpercapita <-
  read.csv(here("scripts","data","GDPpercapita.csv"),
           sep = ";")
MilitaryExpenditurePercentGDP <-
  read.csv(here("scripts","data","MilitaryExpenditurePercentGDP.csv"),
           sep = ";")
MiliratyExpenditurePercentGovExp <-
  read.csv(here("scripts","data","MiliratyExpenditurePercentGovExp.csv"),
           sep = ";")

After importing the data, we fill in the missing country codes using the column Indicator.Name: after some manipulations, we realized that some of the country codes were wrong, but that this adjacent column contained the right ones.

Code
#### GDP per capita fill code ####

fill_code <- function(data){
  data <- data %>%
    mutate(Country.Code = ifelse(!grepl("^[A-Z]{3}$", Country.Code),
                                 Indicator.Name, Country.Code))
}

We create a set of functions that we will apply to each database. First, remove the variables that we don’t need, namely the years before 2000. Second, make sure that the values are numeric and rename the year variables (they all had an “X” before the year number). Third, transform the database from wide to long format, in order to match the main database. Fourth, transform the year variable into an integer and rearrange and rename the columns to match those of the other databases. Then, we apply these transformations to the three databases.

Code
#### Useful functions ####

remove <- function(data){
  years <- seq(1960, 1999)
  removeyears <- paste("X", years, sep = "")
  data <- data[, !(names(data) %in% c("Indicator.Name",
                                      "Indicator.Code",
                                      "X",
                                      removeyears))]
}

makenum <- function(data) {
  for (i in 2000:2022) {
    year <- paste("X", i, sep = "")
    data[[year]] <- as.numeric(data[[year]])
  }
  return(data)
}

renameyear <- function(data) {
  for (i in 2000:2022) {
    varname <- paste("X", i, sep = "")
    names(data)[names(data) == varname] <- gsub("X", "", varname)
  }
  return(data)
}

wide2long <- function(data) {
  data <- pivot_longer(data, 
                       cols = -c("Country.Name",
                                 "Country.Code"), 
                       names_to = "year", 
                       values_to = "data")
  return(data)
}

yearint <- function(data) {
  data$year <- as.integer(data$year)
  return(data)
}

nameorder <- function(data) {
  colnames(data) <- c("country",
                      "code",
                      "year",
                      "data")
  data <- data %>% select(c("code",
                            "country",
                            "year",
                            "data"))
}

cleanwide2long <- function(data){
  data <- fill_code(data)
  data <- remove(data)
  data <- makenum(data)
  data <- renameyear(data)
  data <- wide2long(data)
  data <- yearint(data)
  data <- nameorder(data)
}

GDPpercapita <-
  cleanwide2long(GDPpercapita)
MilitaryExpenditurePercentGDP <-
  cleanwide2long(MilitaryExpenditurePercentGDP)
MiliratyExpenditurePercentGovExp <-
  cleanwide2long(MiliratyExpenditurePercentGovExp)

We rename the columns with the main information, standardize the country codes and remove the countries that are not in our main database. We see that all 166 countries are there.

Code
#### GDP per capita renamed and standardized ####

GDPpercapita <- GDPpercapita %>%
  rename(GDPpercapita = data)
MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
  rename(MilitaryExpenditurePercentGDP = data)
MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
  rename(MiliratyExpenditurePercentGovExp = data)

GDPpercapita$code <- countrycode(
  sourcevar = GDPpercapita$code,
  origin = "iso3c",
  destination = "iso3c"
)

MilitaryExpenditurePercentGDP$code <- countrycode(
  sourcevar = MilitaryExpenditurePercentGDP$code,
  origin = "iso3c",
  destination = "iso3c"
)

MiliratyExpenditurePercentGovExp$code <- countrycode(
  sourcevar = MiliratyExpenditurePercentGovExp$code,
  origin = "iso3c",
  destination = "iso3c"
)

GDPpercapita <- GDPpercapita %>%
  filter(code %in% list_country)
length(unique(GDPpercapita$code))
#> [1] 166

MilitaryExpenditurePercentGDP <- MilitaryExpenditurePercentGDP %>%
  filter(code %in% list_country)
length(unique(MilitaryExpenditurePercentGDP$code))
#> [1] 166

MiliratyExpenditurePercentGovExp <- MiliratyExpenditurePercentGovExp %>%
  filter(code %in% list_country)
length(unique(MiliratyExpenditurePercentGovExp$code))
#> [1] 166

There were only 157 countries present in both the main SDG dataset and these 3 datasets, but we suspected that some of the missing countries were in the database without being correctly matched. Indeed, Bahamas was in the database, but instead of the code “BHS” there was “The”; for “COD” it was “Dem. Rep.”; for “COG” it was “Rep”; etc. We noticed that the correct code appears in another column of the initial database, “Indicator.Name”. We therefore went back to the initial database and restored the right codes before cleaning it (as seen above); after rerunning the code, we have all 166 countries from the initial dataset.

Code
#### Missing countries ####

list_country_GDP <- c(unique(GDPpercapita$code))
setdiff(list_country, list_country_GDP)
#> character(0)

Code
#### Pre-cleaned datasets on GDP per capita ####

D3_1_GDP_per_capita <- GDPpercapita
D3_2_Military_Expenditure_Percent_GDP <- MilitaryExpenditurePercentGDP
D3_3_Miliraty_Expenditure_Percent_Gov_Exp <- MiliratyExpenditurePercentGovExp

Here are the first few lines of the cleaned dataset of GDP per capita:

For this dataset, we went from ??? observations for 68 variables to 3,818 observations for 4 variables.

Here are the first few lines of the cleaned dataset of military expenditures in percentage of GDP:

For this dataset, we went from ??? observations for 68 variables to 3,818 observations for 4 variables.

Here are the first few lines of the cleaned dataset of military expenditures in percentage of government expenditures:

2.3.1.4 Dataset on internet usage

To prepare the dataset on internet usage to be merged with the other data, we first import the data. Then, we keep only the years that we are interested in (2000 to 2022). We also rename the columns and keep only the countries that match the list of countries in the main SDG dataset.

Code
#### Internet usage pre-cleaning ####

D4_0_Internet_usage <- read.csv(here("scripts", "data", "InternetUsage.csv")) %>%
  filter(Year >= 2000, Year <= 2022) %>%
  rename(
    code = Code,
    country = Entity,
    year = Year,
    internet_usage = Individuals.using.the.Internet....of.population.
  ) %>%
  mutate(internet_usage = internet_usage / 100) %>%
  filter(code %in% list_country) %>%
  select(code, country, year, internet_usage)

Here are the first few lines of the cleaned dataset of internet usage:

For this dataset, we reduced the size from 6,570 observations across 4 variables to 3,433 observations for 4 variables.

2.3.1.5 Dataset on human freedom index

After importing the data from the Cato Institute website, we noticed that even though the file was called “Human Freedom Index 2022”, the available observations only went from 2000 up to 2020. We first modified the dataset to match our other datasets, by renaming/encoding/standardizing the columns containing the country names.

Code
#### Human Freedom Index pre-cleaning 1 ####

data <- read.csv(here("scripts", "data", "human-freedom-index-2022.csv"))

# Convert the data into a tibble
datatibble <- tibble(data)

# Rename the column "countries" into "country" to match the other databases
names(datatibble)[names(datatibble) == "countries"] <- "country"

# Make sure the encoding of the country names is UTF-8
datatibble$country <- iconv(datatibble$country, to = "UTF-8", sub = "byte")

# standardize country names
datatibble <- datatibble %>%
  mutate(country = countrycode(country, "country.name", "country.name"))

Once done, we could verify which countries were present in both this dataset and our main SDG dataset. We decided to keep the ones that were matching between the two datasets.

Code
#### Human Freedom Index pre-cleaning 2 ####

# Merge by country name
datatibble <- datatibble %>%
  left_join(D1_0_SDG_country_list, by = "country")

datatibble <- datatibble %>% filter(code %in% list_country)
(length(unique(datatibble$code)))
#> [1] 159

# See which ones are missing
list_country_free <- c(unique(datatibble$code))
setdiff(list_country, list_country_free)
#> [1] "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"

# Turkey was missing but present in the initial database (it was a problem
# when standardizing the country names of D1_0_SDG_country_list that we
# corrected); the other missing countries are:
# "AFG" "CUB" "MDV" "STP" "SSD" "TKM" "UZB"
D5_0_Human_freedom_index <- datatibble

Then, we noticed that many of the 141 variables were not relevant for us. So we decided to keep the ones that refer to country information (such as code, year, etc.) and the human freedom scores per category (pf for personal freedom, ef for economic freedom).

Code
#### Human Freedom Index pre-cleaning 3 ####

# Erasing useless columns to keep only the general ones. 
D5_0_Human_freedom_index <- select(D5_0_Human_freedom_index, year, country,
                                   region, hf_score, pf_rol, pf_ss,
                                   pf_movement, pf_religion, pf_assembly,
                                   pf_expression, pf_identity, pf_score,
                                   ef_government, ef_legal, ef_money, ef_trade,
                                   ef_regulation, ef_score, code)

D5_0_Human_freedom_index <- D5_0_Human_freedom_index %>%
  rename(
    pf_law = names(D5_0_Human_freedom_index)[5],      # Renames the 5th column to "pf_law"
    pf_security = names(D5_0_Human_freedom_index)[6]  # Renames the 6th column to "pf_security"
  )

Here are the first few lines of the partially cleaned dataset on Human Freedom Index scores:

For this dataset, we reduced the size from 3,465 observations across 141 variables to 3,339 observations for 19 variables.

2.3.1.6 Dataset on Disasters

For this dataset on disasters, we imported the data from Kaggle, as we couldn’t find the original dataset, which is private and comes from the EOSDIS system, an interactive interface for browsing full-resolution, global, daily satellite images from NASA. Once we made sure that our file called “Disasters” was converted into a data frame, we selected the specific columns we were interested in.

Code
#### Disasters pre-cleaning 1 ####

Disasters <- read.csv(here("scripts", "data", "Disasters.csv")) %>%
  select(Year, Country, ISO, Location, Continent, Disaster.Subgroup,
         Disaster.Type, Total.Deaths, No.Injured, No.Affected, No.Homeless,
         Total.Affected, Total.Damages...000.US..)

Because our file lists all the disasters in each country over the years 1970-2021 and we wanted to focus on a specific period, we filtered our data to keep the years between 2000 and 2022. Then we rearranged the data, changing the data types of all the columns and their names in order to match our other datasets.

Code
#### Disasters pre-cleaning 2 ####

# Rearrange the columns, changed the type of data, renamed the columns
Rearanged_Disasters <- Disasters %>%
  filter(Year >= 2000 & Year <= 2022) %>%
  mutate(
    code = as.character(ISO),
    country = as.character(Country),
    year = as.integer(Year),
    continent = as.character(Continent),
    disaster.subgroup = as.character(Disaster.Subgroup),
    disaster.type = as.character(Disaster.Type),
    location = as.character(Location),
    total.deaths = as.numeric(Total.Deaths),
    no.injured = as.numeric(No.Injured),
    no.affected = as.numeric(No.Affected),
    no.homeless = as.numeric(No.Homeless),
    total.affected = as.numeric(Total.Affected),
    total.damages = as.numeric(Total.Damages...000.US..)
  )

We then grouped the data by “year”, “code”, “country” and “continent” and summarized it. Here you can see that we re-selected specific columns, as our first pre-selection was still too wide and some variables, such as disaster.subgroup and disaster.type, were not pertinent. We arranged the columns based on “code”, “country”, “year” and “continent” to match the other datasets.

Code
#### Disasters pre-cleaning 3 ####

Disasters <- Rearanged_Disasters %>%
  group_by(year,code, country, continent) %>%
  summarize(
    total_deaths = sum(total.deaths, na.rm = TRUE),
    no_injured = sum(no.injured, na.rm = TRUE),
    no_affected = sum(no.affected, na.rm = TRUE),
    no_homeless = sum(no.homeless, na.rm = TRUE),
    total_affected = sum(total.affected, na.rm = TRUE),
    total_damages = sum(total.damages, na.rm = TRUE)
  ) 

D6_0_Disasters <- Disasters %>%
  select(code, country, year, continent, total_deaths, no_injured, no_affected,
         no_homeless, total_affected, total_damages) %>%
  arrange(code, country, year, continent)

Finally, we filtered our disasters data to keep only the countries that are present in our main dataset. Analysing the missing countries, we identified three (BHR, BRN, MLT) that are unexpectedly absent.

Code
#### Disasters pre-cleaning 4 ####

D6_0_Disasters <- D6_0_Disasters %>% filter(code %in% list_country)
length(unique(D6_0_Disasters$code))
#> [1] 163

# Here we see which countries are missing
list_country_disasters <- c(unique(D6_0_Disasters$code))
setdiff(list_country, list_country_disasters)
#> [1] "BHR" "BRN" "MLT"

Here are the first few lines of the cleaned dataset on Disasters:

2.3.1.7 Dataset on COVID

This dataset contains information on the COVID-19 pandemic between 2020 and 2022, with one observation per day. After importing the database, we transform the date (in format YYYY-MM-DD) to keep only the year.

Code
#### COVID pre-cleaning 1 ####

COVID <- read.csv(here("scripts", "data", "COVID.csv")) %>%
  select(iso_code, location, date, new_cases_per_million,
         new_deaths_per_million, stringency_index) %>%
  mutate(date = as.integer(year(date)))

We perform a first round of investigation of the missing values before aggregating the values by year. We begin with the variables “cases per million” and “deaths per million”: for each country, we have either only missing values or a very low percentage of missing values (~1%), so we can compute the sum over each year while ignoring the missing values without materially altering the data (note that, in R, sum() with na.rm = TRUE over an all-NA group returns 0 rather than NA, so such countries end up with zeros that should be read as missing). We then look at the “stringency” variable, where we face three scenarios:

  1. ~20% of missing values: we ignore them when computing the mean, which still gives a good idea of the stringency each year (since we average over the whole year, a few missing days are not a problem; stringency cannot evolve that fast).

  2. all values are missing: we can ignore the NAs when computing the mean, because mean() with na.rm = TRUE still returns a missing value (NaN) in this case.

  3. almost all values are missing: here the mean doesn’t make sense, so we replace the values by NAs to stay coherent. The countries with this issue are ERI, GUM, PRI and VIR. We verify whether they appear in our main dataset; since none of them do, we can ignore the issue, as these rows will be removed later anyway.
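As a quick sanity check on this reasoning, the behaviour of R’s aggregation functions on an all-NA vector can be verified directly (a minimal illustration, not part of the pipeline):

```r
# How sum() and mean() treat a group whose values are all missing
x <- c(NA_real_, NA_real_)

sum(x, na.rm = TRUE)   # returns 0, not NA
mean(x, na.rm = TRUE)  # returns NaN, which still shows up as missing
```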

We aggregate the observations of all days of a year in one observation per country using the mean.

Code
#### COVID missing values ####

COVID1 <- COVID %>%
  group_by(iso_code) %>%
  summarize(NaDeaths = round(mean(is.na(new_deaths_per_million)),3),
            NaCases = round(mean(is.na(new_cases_per_million)), 3),
            NaStringency = round(mean(is.na(stringency_index)), 3)) %>%
  pivot_longer(cols = starts_with("Na"),
               names_to = "Variable",
               values_to = "NaValue")%>%
  filter(NaValue!=0)

issue_list <- c("ERI",
                "GUM",
                "PRI",
                "VIR")
is.element(issue_list, list_country)
#> [1] FALSE FALSE FALSE FALSE

COVID <- COVID %>%
  group_by(location, date) %>%
  mutate(
    cases_per_million = sum(new_cases_per_million, na.rm = TRUE),
    deaths_per_million = sum(new_deaths_per_million, na.rm = TRUE),
    stringency = mean(stringency_index, na.rm = TRUE)
  )%>%
  ungroup()

Now that all the variables of interest are aggregated by year, we remove all the variables that we don’t need and rename all the remaining variables to match the main dataset.

Code
#### COVID renaming ####

COVID <- COVID %>%
  group_by(location, date) %>%
  distinct(date, .keep_all = TRUE) %>%
  ungroup()

COVID <- COVID %>%
  select(-c(new_cases_per_million, new_deaths_per_million, stringency_index))

colnames(COVID) <- c("code",
                     "country",
                     "year",
                     "cases_per_million",
                     "deaths_per_million",
                     "stringency")

We remove the years after 2022, make sure that the country codes are all three-letter ISO codes (we observe that some are preceded by “OWID_”) and standardize the country codes.

Code
#### COVID years and code cleaning ####

COVID <- COVID[COVID$year <= 2022, ]

COVID$code <- gsub("OWID_", "", COVID$code)

COVID$code <- countrycode(
  sourcevar = COVID$code,
  origin = "iso3c",
  destination = "iso3c"
)
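Passing iso3c as both origin and destination may look like a no-op, but it acts as a validation step: any code that is not a valid ISO 3166-1 alpha-3 code is turned into NA (with a warning). A minimal sketch, assuming the countrycode package is loaded and using “WRL” as a hypothetical OWID aggregate left over after stripping the prefix:

```r
library(countrycode)

codes <- c("FRA", "WRL")  # "WRL" is an OWID world aggregate, not an ISO code
countrycode(codes, origin = "iso3c", destination = "iso3c")
# "FRA" comes back unchanged; "WRL" becomes NA (with a warning)
```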

We remove the observations of countries that aren’t in our main dataset on SDGs and verify that all 166 countries of the main SDG dataset are present in this one (the count of 238 below is computed on the unfiltered COVID data, which also covers non-SDG entities).

Code
#### COVID pre-cleaned dataset ####

D7_0_COVID <- COVID %>%
  filter(code %in% list_country)
length(unique(COVID$code))
#> [1] 238

Here are the first few lines of the cleaned dataset on COVID19:

2.3.1.8 Dataset on Conflicts

For our conflicts dataset, we imported the data from “The World Bank” data catalog. Once we made sure that our file called “Conflicts” was converted into a data frame, we selected the specific columns we were interested in.

Code
#### Conflicts dataset ####

Conflicts <- read.csv(here("scripts", "data", "Conflicts.csv")) %>%
  as.data.frame() %>%
  select(year, country, ongoing, gwsum_bestdeaths, pop_affected, 
         peaceyearshigh, area_affected, maxintensity, maxcumulativeintensity)

Our file showed all the conflicts and their consequences per country over the years 2000-2016; we could not find a more complete dataset. As we consider conflicts as events, we will only take into account results between 2000 and 2016. We then rearranged our data, changing the data types and names of all the columns to match our other datasets, grouped the data by “year” and “country”, re-selected some variables and summarized the data.

Code
#### Conflicts rearranging 1 ####

Rearanged_Conflicts <- Conflicts %>%
  filter(year >= 2000 & year <= 2022)%>%
  mutate(
    ongoing = as.integer(ongoing),
    country = as.character(country),
    year = as.integer(year),
    gwsum_bestdeaths = as.numeric(gwsum_bestdeaths),
    pop_affected = as.numeric(pop_affected),
    area_affected = as.numeric(area_affected),
    maxintensity = as.numeric(maxintensity),
    )

# Group the data by "year", "country" and summarize the data
Conflicts <- Rearanged_Conflicts %>%
  group_by(year, country) %>%
  summarize(
    ongoing = sum (ongoing, na.rm = TRUE),
    sum_deaths = sum(gwsum_bestdeaths, na.rm = TRUE),
    pop_affected = sum(pop_affected, na.rm = TRUE),
    area_affected = sum(area_affected, na.rm = TRUE),
    maxintensity = sum(maxintensity, na.rm = TRUE),
  )

Afterwards, we selected specific columns from the summarized data and arranged it by our specified columns. To make our dataset compatible with the main one and let the merging phase succeed, we made some adjustments to the country names. We then standardized the names, merged by country name and finally rearranged the data to retain only the countries present in our main dataset. Note that in the end only one of our main dataset’s countries is missing from the initial conflicts database: BLR.

Code
#### Conflicts rearranging 2 ####

conflicts <- Conflicts %>%
  select(country, year, ongoing, sum_deaths,
         pop_affected, area_affected, maxintensity) %>%
  arrange(country, year)

conflicts$country <- iconv(conflicts$country, to = "UTF-8", sub = "byte")

conflicts <- conflicts %>%
  mutate(country = countrycode(country, "country.name", "country.name"))

conflicts <- conflicts %>%
  left_join(D1_0_SDG_country_list, by = "country")

conflicts <- conflicts %>%
  select(code, country, year, ongoing, sum_deaths,
         pop_affected, area_affected, maxintensity) %>%
  arrange(code, country, year)


D8_0_Conflicts <- conflicts %>%
  filter(code %in% list_country)
(length(unique(conflicts$code)))
#> [1] 166

# See which countries are missing
list_country_conflicts <- c(unique(conflicts$code))
setdiff(list_country, list_country_conflicts)
#> [1] "BLR"

Here are the first few lines of the cleaned dataset on Conflicts:

2.3.1.9 Merging our dataset

By merging our eight pre-cleaned datasets, we create a final database.

Code
#### Pre-cleaned datasets merged ####

D2_1_Unemployment_rate$country <- NULL
merge_1_2 <- D1_0_SDG |> left_join(D2_1_Unemployment_rate,
                                   join_by(code, year))

D3_1_GDP_per_capita$country <- NULL
merge_12_3 <- merge_1_2 |> left_join(D3_1_GDP_per_capita,
                                     join_by(code, year))

D3_2_Military_Expenditure_Percent_GDP$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_2_Military_Expenditure_Percent_GDP,
                                      join_by(code, year)) 

D3_3_Miliraty_Expenditure_Percent_Gov_Exp$country <- NULL
merge_12_3 <- merge_12_3 |> left_join(D3_3_Miliraty_Expenditure_Percent_Gov_Exp,
                                      join_by(code, year)) 

D4_0_Internet_usage$country <- NULL
merge_123_4 <- merge_12_3 |> left_join(D4_0_Internet_usage,
                                       join_by(code, year)) 

D5_0_Human_freedom_index$country <- NULL
merge_1234_5 <- merge_123_4 |> left_join(D5_0_Human_freedom_index,
                                         join_by(code, year)) 

D6_0_Disasters$country <- NULL
merge_12345_6 <- merge_1234_5 |> left_join(D6_0_Disasters,
                                           join_by(code, year)) 

D7_0_COVID$country <- NULL
D7_0_COVID <- D7_0_COVID |> distinct(code, year, .keep_all = TRUE)
merge_123456_7 <- merge_12345_6 |> left_join(D7_0_COVID,
                                             join_by(code, year)) 

D8_0_Conflicts$country <- NULL
all_Merge <- merge_123456_7 |> left_join(D8_0_Conflicts,
                                         join_by(code, year)) 

2.3.2 Cleaning of the final database

2.3.2.1 Filling missing continent and region columns

When we merged our datasets, we noticed that some countries were not assigned their corresponding continent and/or region. This issue arose because we sourced the continent and region data from secondary databases, not from our main one. We now add the missing continents and regions.

Code
#### Filling missing continents and regions ####

# Update all_Merge with region and continent information
all_Merge <- all_Merge %>%
  group_by(country) %>%
  mutate(
    continent = ifelse(is.na(continent), first(na.omit(continent)), continent),
    region = ifelse(is.na(region), first(na.omit(region)), region)
    ) %>%
  ungroup() %>%
  mutate(continent = case_when(
    code %in% c("BHR") ~ "Asia",
    code %in% c("BRN") ~ "Asia",
    code %in% c("MLT") ~ "Europe",
      TRUE ~ continent
    ), 
    region = case_when(
    code %in% c("AFG", "MDV") ~ "South Asia",
    code %in% c("CUB") ~ "Latin America & the Caribbean",
    code %in% c("STP", "SSD") ~ "Sub-Saharan Africa",
    code %in% c("TKM", "UZB") ~ "Caucasus & Central Asia",
      TRUE ~ region))

We order the database, beginning by the information on the country, the year, the continent and the region.

Code
#### Ordering the database and saving it as .CSV ####

all_Merge <- as.data.frame(all_Merge) %>%
  select(code, year, country, continent, region, everything())

write.csv(all_Merge, file = here("scripts","data","all_Merge.csv"))

Here are the first few lines of the final dataset:

Final structure of our merged database: each of the 166 countries from D1_1_SDG is observed each year from 2000 to 2022, so each row has a key (code, year) that uniquely identifies an observation. The other columns are the variables listed above. Because some countries have a lot of missing information we will have to eliminate some of them, but we will still have more than 2000 rows in our database.
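This key property can be checked mechanically; a minimal sketch, assuming all_Merge is still in memory and dplyr is loaded:

```r
library(dplyr)

# Each (code, year) pair should appear exactly once
stopifnot(nrow(all_Merge) == nrow(distinct(all_Merge, code, year)))
```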

2.3.3 Treatment of missing values

We load our final database and we visualize the missing values.

Code
#### Loading the final database to be cleaned ####

all_Merge <- read.csv(here("scripts","data","all_Merge.csv"))

# Remove unnecessary column
all_Merge <- all_Merge %>% select(-c(X))

# Create a dataframe with the goals without NAs summarized in one column to
# simplify the visualization
goal_vars <- all_Merge %>%
  select(starts_with("goal")) %>%
  filter_all(all_vars(!is.na(.))) %>%
  colnames()
to_plot_missing <- all_Merge %>%
  mutate(Goals_without_NAs = rowSums(!is.na(select(., all_of(goal_vars))))) %>%
  select(-c(goal2, goal3, goal4, goal5, goal6, goal7, goal8, goal9,
            goal11, goal12, goal13, goal15, goal16, goal17))

vis_dat(to_plot_missing, warn_large_data = FALSE) +
  scale_fill_brewer(palette = "Paired") +
  theme(
    axis.text.x = element_text(angle = 90, size = 6),
    legend.text = element_text(size = 8),  # Adjust the size of legend text
    legend.title = element_text(size = 10) 
  )

For each of our research questions, we will start from the merged dataset and deal with the missing values separately. This allows us to avoid deleting observations when we do not need to.

For question 1, we only keep the years until 2020, because most of the explanatory variables that we want to use (those coming from the human freedom index) only have values until 2020.

Code
#### Cleaning the database for question 1 ####

data_question1 <- all_Merge %>%
  filter(year<=2020) %>%
  select(-c(total_deaths, no_injured, no_affected, no_homeless, total_affected,
            total_damages, cases_per_million, deaths_per_million, stringency,
            ongoing, sum_deaths, pop_affected, area_affected, maxintensity))

For questions 2 and 4, we use the main data from the SDG database.

Code
#### Cleaning the database for question 2 and 4 ####

data_question24 <- all_Merge %>%
  select(c(code, year, country, continent, region, overallscore, goal1, goal2,
           goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
           goal12, goal13, goal15, goal16, goal17))

For question 3, we create three distinct databases according to the type of event analysed: disasters, COVID-19 and conflicts. For the disasters, we only keep the years up to 2021, because we have no data after that date; moreover, we decided to delete the countries Bahrain, Brunei and Malta, as we have no data concerning them. For the conflicts, we only keep the years up to 2016, again because we have no data after that date, and we erase Belarus because we have no data for this country.

Code
# Disasters
data_question3_1 <- all_Merge %>%
  filter(year<=2021 & code!="BHR" & code!="BRN" & code!="MLT") %>%
  select(c(code, year, country, continent, region, overallscore, goal1, goal2,
           goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
           goal12, goal13, goal15, goal16, goal17, total_deaths, no_injured,
           no_affected, no_homeless, total_affected, total_damages))

# COVID
data_question3_2 <- all_Merge %>%
  select(c(code, year, country, continent, region, overallscore, goal1, goal2,
           goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
           goal12, goal13, goal15, goal16, goal17, cases_per_million,
           deaths_per_million, stringency))

# Conflicts 
data_question3_3 <- all_Merge %>%
  filter(year<=2016 & code !="BLR") %>%
  select(c(code, year, country, continent, region, overallscore, goal1, goal2,
           goal3, goal4, goal5, goal6, goal7, goal8, goal9, goal10, goal11,
           goal12, goal13, goal15, goal16, goal17, ongoing, sum_deaths,
           pop_affected, area_affected, maxintensity))

Data for question 1

Dealing with missing values in columns

We begin by visualizing the missing values. To have a less cluttered graph, we group all the goals without NAs into one single variable. We decide to remove MilitaryExpenditurePercentGovExp, because it has too many missing values and contains information similar to MilitaryExpenditurePercentGDP. We also remove hf_score, pf_score and ef_score: they have many missing values, and since these variables summarize the others, deleting them will not make us lose information.

Code
# Create a dataframe with the goals without NAs summarize in one column to simplify the visualization
variable_names <- names(data_question1)
missing_percentages <-
  sapply(data_question1, function(col) mean(is.na(col)) * 100)

missing_data_summary <- data.frame(
  Variable = variable_names,
  Missing_Percentage = missing_percentages
)

missing_data_summary <- missing_data_summary %>%
  mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))

ggplot(data = missing_data_summary,
       aes(x = reorder(VariableGroup, Missing_Percentage),
           y = Missing_Percentage,
           fill = Missing_Percentage)) +
  geom_bar(stat = "identity") +
  geom_text(aes(label = ifelse(Missing_Percentage > 1,
                               sprintf("%.1f%%", Missing_Percentage),
                               ""),
                y = Missing_Percentage),
            position = position_stack(vjust = 1),  # Adjust vertical position
            color = "white",  # Text color
            size = 2,          # Text size
            hjust = 1.05) +
  labs(title = "Percentage of Missing Values by Variable",
       x = "Variable",
       y = "Missing Percentage") +
  theme_minimal() +
  theme(axis.text.y = element_text(hjust = 1, size=6 ),
        legend.text = element_text(size = 8),
        legend.title = element_text(size = 10)) +
  labs(fill = "% NAs") +
  coord_flip()

data_question1 <- data_question1 %>% select(-c(MiliratyExpenditurePercentGovExp,
                                               hf_score, pf_score, ef_score))

Dealing with missing values in rows

We create a column with the number of missing values per country over all the variables, except goal 1 and goal 10, which we already discussed. We decide to remove the countries that have more than 50 missing values.

Code
see_missing1_1 <- data_question1 %>%
  group_by(code) %>%
  summarise(across(-c(year, country, continent, region, population,
                      overallscore, goal1, goal2, goal3, goal4, goal5, goal6,
                      goal7, goal8, goal9, goal10, goal11, goal12, goal13,
                      goal15, goal16, goal17), 
                   ~ sum(is.na(.))) %>%
              mutate(num_missing = rowSums(across(everything()))) %>%
              filter(num_missing > 50))

data_question1 <- data_question1 %>% filter(!code %in% see_missing1_1$code)

Here is the graph that allows us to visualize the countries that have missing values, how many and for which variables, when there are more than 50 NAs in total.

Code
ggplot(see_missing1_1, aes(x = num_missing,
                           y = reorder(code, num_missing),
                           fill = num_missing)) +
    geom_bar(stat = "identity") + 
    scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
    theme_minimal() +
  theme(axis.text.y = element_text(hjust = 1, size=8 ),
        legend.text = element_text(size = 8),
        legend.title = element_text(size = 10)) +
    labs(title = "Number of missing values per country containing at least 50 NAs",
         x = "Number of Missing Values",
         y = "Countries")

Code
see_missing1_2 <- data_question1 %>%
  group_by(code) %>%
  summarise(across(-c(year, country, continent, region, population,
                      overallscore, goal1, goal2, goal3, goal4, goal5, goal6,
                      goal7, goal8, goal9, goal10, goal11, goal12, goal13,
                      goal15, goal16, goal17),
                   ~ sum(is.na(.))) %>%
              mutate(num_missing = rowSums(across(everything()))) %>%
              filter(num_missing > 0))

Here is the ggplot that helps us to visualize the countries that have missing values after removing the countries with more than 50 NAs.

Code
ggplot(see_missing1_2, aes(x = num_missing ,
                           y = reorder(code, num_missing),
                           fill = num_missing)) +
    geom_bar(stat = "identity", width = 0.5) + 
    scale_fill_gradient(low = "lightgreen", high = "darkgreen") +
    theme_minimal() +
  theme(axis.text.y = element_text(hjust = 1, size= 6 ),
        legend.text = element_text(size = 8),
        legend.title = element_text(size = 10)) +
        labs(title = "Number of missing values per country",
             x = "Number of Missing Values",
             y = "Countries")

We also look at patterns of missing values in the rows and see that, except for the two goals with NAs that we discussed earlier and for the triplet “ef_money”, “ef_trade” and “ef_regulation”, there are no well-defined patterns. We remove the countries that have NAs in all three of these variables at the same time.

Code
naniar::gg_miss_upset(data_question1, nsets=10, nintersects=11)

data_question1 <- data_question1[rowSums(is.na(data_question1[, c("ef_money",
                                                                  "ef_trade",
                                                                  "ef_regulation")])) < 3, ]

data_question1 <- data_question1 %>%
  group_by(code) %>%
  filter(all(2000:2020 %in% year)) %>%
  ungroup()

GDP per capita

Only Venezuela has missing values that we cannot fill (because its evolution over time is not linear), so we delete the country.

Code
question1_missing_GDP <- data_question1 %>%
  group_by(code) %>%
  summarize(NaGDPpercapita = mean(is.na(GDPpercapita)))%>%
  filter(NaGDPpercapita != 0)

data_question1 <- data_question1 %>% filter(code!="VEN")
Military expenditure in % of GDP

For MilitaryExpenditurePercentGDP, we plot the evolution of the variable over the years for each country containing missing values, distinguishing the percentage of missing values with colors.

Code
MilitaryExpenditurePercentGDP1 <- data_question1 %>%
  group_by(code) %>%
  summarize(NaMil1 = round(mean(is.na(MilitaryExpenditurePercentGDP)),3)) %>%
  filter(NaMil1 != 0)

filtered_data_Mil1 <- MilitaryExpenditurePercentGDP %>%
  filter(code %in% MilitaryExpenditurePercentGDP1$code) # countries with NAs

filtered_data_Mil1 <- filtered_data_Mil1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
  ungroup()

Evol_Missing_Mil1 <- ggplot(data = filtered_data_Mil1) +
  geom_line(aes(x = year,
                y = MilitaryExpenditurePercentGDP, 
                 color = cut(PercentageMissing,
                             breaks = c(0,
                                        0.1,
                                        0.2,
                                        0.3,
                                        1),
                             labels = c("0-10%",
                                        "10-20%",
                                        "20-30%",
                                        "30-100%")))) +
  labs(title = "Military expenditure in % of GDP over time",
       x = "Years from 2000 to 2022",
       y = "Military expenditure in % of GDP") +
  scale_color_manual(values = c("0-10%" = "blue",
                                "10-20%" = "green",
                                "20-30%" = "red",
                                "30-100%" = "black"),
                     labels = c("0-10%",
                                "10-20%",
                                "20-30%",
                                "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 5) +
  theme(strip.text = element_text(size = 6)) +
  scale_x_continuous(breaks = NULL) +
  scale_y_continuous(breaks = NULL)

print(Evol_Missing_Mil1)

We delete the countries with more than 30% of missing values; for the countries with less than 30% missing and a linear evolution in time, we fill the missing values using linear interpolation.

Code
data_question1 <- data_question1 %>% filter(code!="ARE" &
                                              code!="BHS" &
                                              code!="BRB" &
                                              code!="CRI" &
                                              code!="HTI" &
                                              code!="ISL" &
                                              code!="PAN" &
                                              code!="SYR" &
                                              code!="VNM") 

list_code <- c("BDI", "BEN", "CAF", "CIV", "COD",
               "GAB", "NER", "TGO", "TTO", "ZMB")

# na.interp() (from the forecast package) fills the NAs by interpolation
for (i in list_code) {
  country_data <- data_question1 %>%
    filter(code == i)
  interpolated_data <- na.interp(country_data$MilitaryExpenditurePercentGDP)
  data_question1[data_question1$code == i, "MilitaryExpenditurePercentGDP"] <- interpolated_data
}

Then, we look at the distribution of the variable per region. Seeing that all the distributions are skewed, we decide to replace the remaining missing values (where less than 30% are missing) with the median by region.

Code
question1_missing_Military <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(MilitaryExpenditurePercentGDP))) %>% # Column % NAs
  ungroup() %>%
  group_by(region) %>%
  filter(sum(PercentageMissing, na.rm = TRUE) > 0)

Freq_Missing_Military <- ggplot(data = question1_missing_Military) +
  geom_histogram(aes(x = MilitaryExpenditurePercentGDP, 
                     fill = cut(PercentageMissing,
                                breaks = c(0,
                                           0.1,
                                           0.2,
                                           0.3,
                                           1),
                                labels = c("0-10%",
                                           "10-20%",
                                           "20-30%",
                                           "30-100%"))),
                 bins = 30) +
  labs(title = "Distribution of Military expenditures in % of GDP",
       x = "Military expenditures in % of GDP",
       y = "Frequency") +
  scale_fill_manual(values = c("0-10%" = "blue",
                               "10-20%" = "green",
                               "20-30%"="red",
                               "30-100%" = "black"),
                    labels = c("0-10%",
                               "10-20%",
                               "20-30%",
                               "30-100%")) +
  guides(fill = guide_legend(title = "% missings")) +
  facet_wrap(~ region, nrow = 1)

print(Freq_Missing_Military)

data_question1 <- data_question1 %>%
  group_by(code) %>%
  mutate(
    PercentageMissingByCode = mean(is.na(MilitaryExpenditurePercentGDP))
  ) %>%
  ungroup() %>%  
  group_by(region) %>%
  mutate(
    MedianByRegion = median(MilitaryExpenditurePercentGDP, na.rm = TRUE),
    MilitaryExpenditurePercentGDP = ifelse(
      PercentageMissingByCode < 0.3 & !is.na(MilitaryExpenditurePercentGDP),
      MilitaryExpenditurePercentGDP,
      ifelse(PercentageMissingByCode < 0.3, MedianByRegion, MilitaryExpenditurePercentGDP)
    )
  ) %>%
  select(-PercentageMissingByCode, -MedianByRegion)
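The nested ifelse above encodes a simple rule: keep the observed value, and fall back to the regional median only when the value is missing and the country has under 30% missing. The same logic can be sketched more directly with coalesce (an alternative formulation, not the original code):

```r
library(dplyr)

data_question1 <- data_question1 %>%
  group_by(code) %>%
  mutate(pct_missing = mean(is.na(MilitaryExpenditurePercentGDP))) %>%
  ungroup() %>%
  group_by(region) %>%
  mutate(MilitaryExpenditurePercentGDP = if_else(
    pct_missing < 0.3,
    # replace NAs by the regional median, keep observed values as they are
    coalesce(MilitaryExpenditurePercentGDP,
             median(MilitaryExpenditurePercentGDP, na.rm = TRUE)),
    MilitaryExpenditurePercentGDP
  )) %>%
  ungroup() %>%
  select(-pct_missing)
```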

Internet usage

There is only a low percentage of missing values.

Code
question1_missing_Internet <- data_question1 %>%
  group_by(code) %>%
  summarize(NaInternet = mean(is.na(internet_usage)))%>%
  filter(NaInternet != 0)

There are never more than 30% of NAs. We look at the evolution of the variable over time and fill the missing values with linear interpolation, because all the series increase almost linearly over time, except for CIV, which we delete.

Code
question1_missing_Internet <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(internet_usage))) %>% # Column % NAs
  filter(code %in% question1_missing_Internet$code)

Evol_Missing_Internet <- ggplot(data = question1_missing_Internet) +
  geom_line(aes(x = year,
                y = internet_usage, 
                 color = cut(PercentageMissing,
                             breaks = c(0,
                                        0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of internet usage over time", x = "Years from 2000 to 2022", y = "Internet usage") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  scale_x_continuous(breaks=NULL)+
  facet_wrap(~ code, nrow = 4)

print(Evol_Missing_Internet)

list_code <- setdiff(unique(question1_missing_Internet$code), "CIV")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$internet_usage)
  data_question1[data_question1$code == i, "internet_usage"] <- interpolated_data
}

data_question1 <- data_question1 %>% filter(code!="CIV")

Human freedom index
Personal freedom: law

The variable pf_law has many NAs, but only for one country (BLZ), so we decide to remove that country.

Code
data_question1 <- data_question1 %>%
  filter(code!="BLZ")
Economic freedom: government

There are no more missing values, thanks to the previous steps.

Economic freedom: money

5 countries have missing values, but the percentage of missing values is always below 25%.

Code
question1_missing_ef_money <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_ef_money = mean(is.na(ef_money)))%>%
  filter(Na_ef_money != 0)

We look at the evolution of the variable over time, and for the countries with a linear evolution in time, we fill the missing values using linear interpolation.

Code
question1_missing_ef_money <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
  filter(code %in% question1_missing_ef_money$code)

Evol_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
  geom_line(aes(x = year, y = ef_money, 
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of economic freedom: money over time", x = "Years from 2000 to 2022", y = "ef_money") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 2) +
  scale_y_continuous(limits = c(0, 10))

print(Evol_Missing_ef_money)

list_code <- c("GEO", "MKD")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$ef_money)
  data_question1[data_question1$code == i, "ef_money"] <- interpolated_data
}

Then, we look at the distribution of the variable per region. Seeing that all are skewed distributions, we decide to replace the missing values using the median by region.

Code
question1_missing_ef_money <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_money))) %>% # Column % NAs
  ungroup() %>%
  group_by(region) %>%
  filter(sum(PercentageMissing, na.rm = TRUE) > 0)

Freq_Missing_ef_money <- ggplot(data = question1_missing_ef_money) +
  geom_histogram(aes(x = ef_money, 
                     fill = cut(PercentageMissing,
                                breaks = c(0, 0.1, 0.2, 0.3, 1),
                                labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
                 bins = 30) +
  labs(title = "Distribution of economic freedom: money", x = "ef_money", y = "Frequency") +
  scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
  guides(fill = guide_legend(title = "% missings")) +
  facet_wrap(~ region, nrow = 1)

print(Freq_Missing_ef_money)

data_question1 <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissingByCode = mean(is.na(ef_money))) %>%
  ungroup() %>% 
  group_by(region) %>%
  mutate(
    MedianByRegion = median(ef_money, na.rm = TRUE),
    # impute the regional median only for countries with < 30% missing values
    ef_money = ifelse(is.na(ef_money) & PercentageMissingByCode < 0.3,
                      MedianByRegion, ef_money)
  ) %>%
  select(-PercentageMissingByCode, -MedianByRegion)

Economic freedom: trade

6 countries have missing values, but the percentage of missing values is always below 25%.

Code
question1_missing_ef_trade <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_ef_trade = mean(is.na(ef_trade)))%>% # Column % NAs
  filter(Na_ef_trade != 0)

We look at the evolution of the variable over time. For the countries where this evolution is linear, we fill in the missing values using linear interpolation.

Code
question1_missing_ef_trade <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
  filter(code %in% question1_missing_ef_trade$code)

Evol_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
  geom_line(aes(x = year, y = ef_trade, 
                 color = cut(PercentageMissing,
                             breaks = c(0, 0.1, 0.2, 0.3, 1),
                             labels = c("0-10%", "10-20%", "20-30%", "30-100%")))) +
  labs(title = "Evolution of economic freedom: trade over time", x = "Years from 2000 to 2022", y = "ef_trade") +
  scale_color_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%" = "red", "30-100%" = "black"),
                     labels = c("0-10%", "10-20%", "20-30%", "30-100%")) +
  guides(color = guide_legend(title = "% missings")) +
  facet_wrap(~ code, nrow = 2) +
  scale_y_continuous(limits = c(0, 10))

print(Evol_Missing_ef_trade)

# Linear interpolation for "AZE", "GEO", "MKD", "MNG"
list_code <- c("AZE", "GEO", "MKD", "MNG")
for (i in list_code) {
  country_data <- data_question1 %>% filter(code == i)
  interpolated_data <- na.interp(country_data$ef_trade)
  data_question1[data_question1$code == i, "ef_trade"] <- interpolated_data
}

Then, we look at the distribution of the variable per region. Seeing that the only region that still has missing values is a centered distribution, we decide to replace the missing values using the mean of the region.

Code
question1_missing_ef_trade <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissing = mean(is.na(ef_trade))) %>% # Column % NAs
  ungroup() %>%
  group_by(region) %>%
  filter(sum(PercentageMissing, na.rm = TRUE) > 0)

Freq_Missing_ef_trade <- ggplot(data = question1_missing_ef_trade) +
  geom_histogram(aes(x = ef_trade, 
                     fill = cut(PercentageMissing,
                                breaks = c(0, 0.1, 0.2, 0.3, 1),
                                labels = c("0-10%", "10-20%", "20-30%", "30-100%"))),
                 bins = 30) +
  labs(title = "Distribution of economic freedom: trade", x = "ef_trade", y = "Frequency") +
  scale_fill_manual(values = c("0-10%" = "blue", "10-20%" = "green", "20-30%"="red","30-100%" = "black"), labels = c("0-10%", "10-20%", "20-30%","30-100%")) +
  guides(fill = guide_legend(title = "% missings")) +
  facet_wrap(~ region, nrow = 2)

print(Freq_Missing_ef_trade)

data_question1 <- data_question1 %>%
  group_by(code) %>%
  mutate(PercentageMissingByCode = mean(is.na(ef_trade))) %>%
  ungroup() %>% 
  group_by(region) %>%
  mutate(
    MeanByRegion = mean(ef_trade, na.rm = TRUE),
    # impute the regional mean only for countries with < 30% missing values
    ef_trade = ifelse(is.na(ef_trade) & PercentageMissingByCode < 0.3,
                      MeanByRegion, ef_trade)
  ) %>%
  select(-PercentageMissingByCode, -MeanByRegion)

Economic freedom: regulation

There are no more missing values, thanks to the previous steps.
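A quick sanity check can confirm this (a minimal sketch on a toy data frame standing in for `data_question1`):

```r
# Toy stand-in for data_question1: after the previous imputation steps,
# ef_regulation should contain no NA for any country.
toy <- data.frame(code = c("GEO", "GEO", "MKD"),
                  ef_regulation = c(7.1, 6.9, 8.2))
na_share_by_country <- tapply(is.na(toy$ef_regulation), toy$code, mean)
all(na_share_by_country == 0)  # TRUE when nothing is missing
```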

SDGs 1 and 10

We noticed earlier that only goals 1 and 10 still had missing values. As we did before, we investigate where the NAs are located in our dataset, first for goal 1, then for goal 10.

Code
na_count <- sapply(data_question1, function(x) sum(is.na(x)))
na_count_df <- data.frame(variable = names(na_count), num_missing = na_count)
na_count_df_filtered <- subset(na_count_df, num_missing > 0)
ggplot(na_count_df_filtered, aes(x = num_missing, y = variable)) +
    geom_bar(stat = "identity", width = 0.8, fill = 'lightblue') +
    geom_text(aes(label = num_missing), vjust = 0.5, hjust = 1.1, position = position_dodge(width = 0.9)) +
    theme_minimal() +
    theme(axis.text.y = element_text(hjust = 1, size=10 ), 
          legend.text = element_text(size = 8),
          legend.title = element_text(size = 10)) +
    labs(title = "Number of remaining missing values per variable ",
         x = "Number of NAs",
         y = "Variables")

# goal1
question1_missing_goal1 <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_goal1 = mean(is.na(goal1)))%>%
  filter(Na_goal1 != 0)

data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal1$code)
# still 42 NA values goal10

We found that the missing values were located in only 5 countries, so we decided to remove them. At this stage, only 42 missing values remained, all in goal 10. We then apply the same step to goal 10.

Code
#goal10
question1_missing_goal10 <- data_question1 %>%
  group_by(code) %>%
  summarize(Na_goal10 = mean(is.na(goal10)))%>%
  filter(Na_goal10 != 0)

data_question1 <- data_question1 %>% filter(!code %in% question1_missing_goal10$code)

We found the last 2 countries containing missing values. Our dataset is now completely clean and ready to be used for question 1.

Data for questions 2 and 4

We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Since there are no other missing values, we stop here.

Code
see_missing24 <- data_question24 %>%
  group_by(code) %>%
  summarise(across(everything(), ~ sum(is.na(.))), .groups = "drop") %>%
  mutate(num_missing = rowSums(across(where(is.numeric)))) %>%
  filter(num_missing > 0)

data_question24 <- data_question24 %>%
  group_by(country) %>%
  filter(!all(is.na(goal1)) & !all(is.na(goal10)))
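On toy data, the per-country missing-value count used above can be sketched in base R (column names are illustrative):

```r
# Count NAs per country across all variables; a country appears in the
# result only if it has at least one missing value.
toy <- data.frame(
  code  = c("AAA", "AAA", "BBB", "BBB"),
  goal2 = c(55.2, NA, 60.1, 61.3),
  gdp   = c(1200, 1300, NA, NA)
)
num_missing <- tapply(rowSums(is.na(toy[, c("goal2", "gdp")])), toy$code, sum)
num_missing[num_missing > 0]  # AAA: 1, BBB: 2
```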

Data for question 3

The data for question 3 comes in three parts (disasters, COVID-19 and conflicts), so we treat the missing values of each part separately.

Disasters

We begin by visualizing the missing values.

Code
variable_names <- names(data_question3_1)
missing_percentages <- sapply(data_question3_1, function(col) mean(is.na(col)) * 100)

missing_data_summary <- data.frame(
  Variable = variable_names,
  Missing_Percentage = missing_percentages
)

missing_data_summary <- missing_data_summary %>%
  mutate(VariableGroup = ifelse(startsWith(Variable, "goal") & Missing_Percentage == 0, "Goals without NAs", as.character(Variable)))

ggplot(data = missing_data_summary, aes(x = reorder(VariableGroup, Missing_Percentage), y = Missing_Percentage, fill = Missing_Percentage)) +
  geom_bar(stat = "identity") +
  geom_text(aes(label = ifelse(Missing_Percentage > 1, sprintf("%.1f%%", Missing_Percentage), ""),
                y = Missing_Percentage),
            position = position_stack(vjust = 1),  # Adjust vertical position
            color = "white",  # Text color
            size = 3,          # Text size
            hjust = 1.05) +
  labs(title = "Percentage of Missing Values by Variable",
       x = "Variable",
       y = "Missing Percentage") +
  theme_minimal() +
  theme(axis.text.y = element_text(hjust = 1)) +
  coord_flip()

In this particular case, even though there are many missing values in our disaster dataset, we made the hypothesis that disaster events cannot happen every year in every country, given that these are uncontrollable and non-recurring events. Therefore, the NAs that we encounter become zeroes, meaning that no climatic disaster occurred.

Code

data_question3_1[is.na(data_question3_1)] <- 0

COVID19

We look at the missing values for the three variables that are specific to COVID during the years of COVID: 2020 to 2022. We delete the countries that have NAs (only stringency has 6 countries with 100% NAs).

Code
COVID4 <- data_question3_2 %>%
  filter(year >= 2020 & year <= 2022) %>%
  group_by(code) %>%
  summarize(Na_deaths = mean(is.na(deaths_per_million)),
            Na_cases = mean(is.na(cases_per_million)),
            Na_stringency = mean(is.na(stringency))) %>%
  filter(Na_deaths != 0 | Na_cases!=0 |  Na_stringency !=0)

g1 <- ggplot(COVID4, aes(x = reorder(code, Na_deaths), y = Na_deaths)) +
  geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
  labs(title = "NAs by rows: deaths per million",
       x = "Code",
       y = "Proportion of Missing Values") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

g2 <- ggplot(COVID4, aes(x = reorder(code, Na_cases), y = Na_cases)) +
  geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
  labs(title = "NAs by rows: cases per million",
       x = "Code",
       y = "Proportion of Missing Values") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

g3 <- ggplot(COVID4, aes(x = reorder(code, Na_stringency), y = Na_stringency)) +
  geom_bar(stat = "identity", fill = "lightgreen", color = "black") +
  labs(title = "NAs by rows: stringency",
       x = "Code",
       y = "Proportion of Missing Values") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

(g1 + g2 + g3) / plot_spacer()

data_question3_2 <- data_question3_2 %>% filter(!code %in% COVID4$code)

We replace the NAs of the other COVID columns (years 2000 to 2019) by 0, because these are not real missing values: they were only introduced by merging with the other databases.

Code
data_question3_2 <- data_question3_2 %>%
  mutate(
    cases_per_million = ifelse(is.na(cases_per_million), 0, cases_per_million),
    deaths_per_million = ifelse(is.na(deaths_per_million), 0, deaths_per_million),
    stringency = ifelse(is.na(stringency), 0, stringency)
  )

Conflicts

We create a column with the number of missing values by country over all the variables, except goal 1 and goal 10 that we already discussed. Three countries have missing values (MNE, SRB and SSD), so we remove them.

Code
#### Removing countries because of missing values ####

see_missing3_3 <- data_question3_3 %>%
  group_by(code) %>%
  summarise(across(-c(goal1, goal10),  # exclude columns "goal1" and "goal10"
                   ~ sum(is.na(.))), .groups = "drop") %>%
  mutate(num_missing = rowSums(across(where(is.numeric)))) %>%
  filter(num_missing > 0)

data_question3_3 <- data_question3_3 %>% filter(!code %in% c("MNE","SRB","SSD"))
Code
#### EXPORT as CSV ####
write.csv(data_question1, file = here("scripts","data","data_question1.csv"))
write.csv(data_question24, file = here("scripts","data","data_question24.csv"))
write.csv(data_question3_1, file = here("scripts","data","data_question3_1.csv"))
write.csv(data_question3_2, file = here("scripts","data","data_question3_2.csv"))
write.csv(data_question3_3, file = here("scripts","data","data_question3_3.csv"))

3 EDA and Analysis of the data

3.1 Focus on the influence of the factors over the SDG scores

For this first part of our EDA, we first check the distribution of the variables selected for answering our first question.

Code

# Reshape the data from wide to long format for our sdg goals and our human freedom index scores
long_df_goal_distribution <- pivot_longer(Correlation_overall, cols = starts_with("goal"), names_to = "Goal", values_to = "Value")

long_df_hfi_distribution <- pivot_longer(Correlation_overall, cols = pf_law:ef_regulation, names_to = "Category", values_to = "Value")

ggplot(long_df_goal_distribution, aes(x = Value, y = Goal, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +
  theme(plot.title = element_text(hjust = 0.5), # Center the title
        plot.title.position = "plot") + 
  labs(x = 'Scores',
       y = 'Goals',
        title = 'SDG Goals Distribution')

As we can see, most of our goals have a left-skewed distribution, which suggests that most of the countries concerned implemented good strategies for targeting the goals' objectives. Some distributions have a longer tail than others, which could point to inequality in the investments made to implement solutions. On the other hand, we notice that the only right-skewed distribution concerns goal 9, which promotes infrastructure, innovation, and inclusive and sustainable industrialization. Again, this could reflect inequalities in means: wealthier countries are able to invest more in this sustainable development goal.

Code

ggplot(long_df_hfi_distribution, aes(x = Value, y = Category, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +
  theme(plot.title = element_text(hjust = 0.5), # Center the title
        plot.title.position = "plot") + 
  labs(x = 'Scores',
    title = 'Human Freedom Index Scores Distribution')

The distribution of the Human Freedom Index scores follows the same trend. Most of the scores are left-skewed, which means that countries tend to have good scores in general. The only scores not following this pattern are pf_law and ef_legal, which tend to be lower overall. Legal systems, for civilians and countries, change slowly, because they have many implications for the situation within a country and between countries, and because of divergent opinions. Therefore, investing more money to raise these scores will take more time than raising the scores of the other goals.

Now let’s consider the remaining variables of the dataset dedicated to answering the influence of factors over our SDG goal scores. All these variables have right-skewed distributions. Taking the mode into account, most of the countries in our data have an unemployment rate between 2% and 7%, a GDP per capita between $3,000 and $10,000, a military expenditure between 10% and 20% of GDP, and an internet usage between 0 and 10%.

These variables show us even more clearly the inequalities between the countries in our dataset. While most of our countries have low internet usage and/or a low GDP per capita, a few countries are more developed, and thus mostly wealthier, giving them better chances of reaching higher scores.

Code
#now, same for the remaining variables. No need to reshape our data as only one variable.
unempl_d <- ggplot(Correlation_overall, aes(x = unemployment.rate, y = 1, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +
  theme(plot.title = element_text(hjust = 0.5, size = 10), # Center the title
        plot.title.position = "plot") + 
  labs(y = 'Density',
  title = 'Distribution of Unemployment Rate')

gdp_d <- ggplot(Correlation_overall, aes(x = GDPpercapita, y = 1, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +
  theme(plot.title = element_text(hjust = 0.5, size = 10), # Center the title
        plot.title.position = "plot") + 
  labs(y = 'Density', title = 'Distribution of GDP per Capita')

milit_d <- ggplot(Correlation_overall, aes(x = MilitaryExpenditurePercentGDP, y = 1, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +
  theme(plot.title = element_text(hjust = 0.5, size = 10), # Center the title
        plot.title.position = "plot") + 
  labs(y = 'Density',title = 'Distribution of Military Expenditure (% of GDP)')

internet_d <- ggplot(Correlation_overall, aes(x = internet_usage, y = 1, fill = stat(x))) +
  geom_density_ridges_gradient(scale = 3, size = 0.3, rel_min_height = 0.01) +
  scale_fill_viridis_c(name = "", option = "C") +theme(plot.title = element_text(hjust = 0.5, size = 10),
        plot.title.position = "plot") + 
  labs(y = 'Density',title = 'Distribution of Internet Usage')

grid.arrange(unempl_d,gdp_d,milit_d,internet_d, ncol = 2, nrow = 2)

Now, let’s display the distribution of the different SDG achievement scores per continent, using violin plots to get an overview of the modes, the range containing most of the observations, and the outliers.

Code
#### boxplots ####

#For sdg goals per continent 

# SDG goals per continent: long format, then the median of each goal
# within each continent (replaces the previous per-continent
# copy-paste; same result, less code)
goal_cols <- c("overallscore", paste0("goal", c(1:13, 15:17)))

medians_all <- data_question1 %>%
  ungroup() %>%
  dplyr::select(continent, all_of(goal_cols)) %>%
  melt(id.vars = "continent") %>%
  group_by(continent, variable) %>%
  mutate(median_value = median(value)) %>% #median per variable and continent
  ungroup() %>%
  as.data.frame()

medians_all$color <- ifelse(medians_all$median_value > 75, "lightgreen",
                        ifelse(medians_all$median_value < 25, "red3", 'lightblue3')) #assigning colors. If the median for a goal is > 75 -> lightgreen, if < 25 -> red, lightblue otherwise.

bandwidth_nrd <- bw.nrd(medians_all$value) #adapting the bandwidth

ggplot(medians_all, aes(x = variable, y = value, fill = color)) +
  geom_violin(trim = FALSE, bw = bandwidth_nrd) +
  scale_fill_manual(values = c("lightgreen" = "lightgreen", "red3" = "red3", "lightblue3" = "lightblue3"),
                    labels = c("between", ">75", "<25")) + 
  labs(title = "SDG Goals Distribution by Continent", x = "Goals", y = "Scores", fill = "Score Category") +
  facet_grid(continent ~ ., scales = "free_y") +
  scale_y_continuous(labels = scales::label_number()) +
  theme_classic() +
  theme(plot.title = element_text(hjust = 0.5), # Center the title
        plot.title.position = "plot", axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
    

Here is the distribution of the goals per continent. We notice that Europe is the continent with most of its goals having a median above 75 (represented by the light green color). Only two goals have a median score lower than 25: goal 9 for Africa and goal 10 for the Americas. As seen before, goal 9 generally has lower scores than the other goals. This could mean that access to technology and to sustainable, resilient infrastructure and industrialization is harder in Africa, for various reasons such as poorer countries, corruption, etc.

Goal 10 concerns the reduction of inequalities within and among countries. Therefore, we presume that less effort and investment has been made on this goal in the Americas.

In addition, some distributions are quite dispersed, such as goal 13 in Oceania and goal 10 in Africa. This could again show inequalities between countries of the same continent, or differing levels of investment made to raise the scores.

Now let’s display boxplots for the different variables of the human freedom index.

Code
#for Human Freedom Index scores 

# HFI scores per continent: long format, then the median of each
# variable within each continent (same approach as for the SDG goals)
hfi_cols <- c("pf_law", "pf_security", "pf_movement", "pf_religion",
              "pf_assembly", "pf_expression", "pf_identity",
              "ef_government", "ef_legal", "ef_money", "ef_trade",
              "ef_regulation")

medians_all_HFI <- data_question1 %>%
  ungroup() %>%
  dplyr::select(continent, all_of(hfi_cols)) %>%
  melt(id.vars = "continent") %>%
  group_by(continent, variable) %>%
  mutate(median_value = median(value)) %>% #median per variable and continent
  ungroup() %>%
  as.data.frame()

medians_all_HFI$color <- ifelse(medians_all_HFI$median_value > 7.5, "lightgreen", 
                        ifelse(medians_all_HFI$median_value < 2.5, "red3", 'lightblue3'))

bandwidth_nrd_HFI <- bw.nrd(medians_all_HFI$value)

# Create the plot
ggplot(medians_all_HFI, aes(x = variable, y = value, fill = color)) +
  geom_violin(trim = FALSE, bw = bandwidth_nrd_HFI) +
  scale_fill_manual(values = c("lightgreen" = "lightgreen", "red3" = "red3", "lightblue3" = "lightblue3"),
                    labels = c("between", ">7.5", "<2.5")) + 
  labs(title = "Human Freedom Index Scores Distribution by Continent", x = "Variables", y = "Scores", fill = "Score Category") +
  facet_grid(continent ~ ., scales = "free_y") +
  scale_y_continuous(labels = scales::label_number()) +
  theme_classic() +
  theme(plot.title = element_text(hjust = 0.5), # Center the title
        plot.title.position = "plot", axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))

Here we notice the same pattern as for the SDG goals, except that no score has a median below 2.5. Again, Europe is the continent with most of its median scores above 7.5 (light green color).

For space reasons, and because of the different scales, we decided not to make violin plots per continent for the remaining variables. Their distributions can be seen in the general distributions shown earlier.

Now, let’s take a closer look at the general correlations between our variables. Using our cleaned dataset, we will use a correlation heatmap to help visualize the information. Given that most of our variables are not normally distributed, we use the Spearman method to calculate the correlations.

Code
#### Correlations between variables Heatmap ####

Correlation_overall <-data_question1 %>% # selection of the numerical data
      dplyr::select(population:ef_regulation)

cor_matrix_sper <- # calculation of the correlation matrix
  cor(Correlation_overall, method = "spearman", use = "everything")

cor_melted <- # wide to long transformation
  melt(cor_matrix_sper)

ggplot(data = cor_melted, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white", 
                       midpoint = 0, limit = c(-1, 1), space = "Lab", 
                       name="Spearman\nCorrelation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8),
        plot.title = element_text(hjust = 0.5)) +
  coord_fixed() +
  labs(x = '', y = '', title = 'Correlation Matrix Heatmap')

By looking at our heatmap, we notice that most of our goals are strongly correlated with each other, and that some of the Human Freedom Index scores are too (a strong correlation among the personal freedom (pf) variables: movement, religion, assembly, and expression). This could be explained by the fact that some of these goals/scores share partially similar objectives, which could mean that raising the score of one of these goals also raises the scores of some others. In addition, we notice that goals 12 and 13 (respectively “responsible consumption & production” and “climate action”) are strongly negatively correlated with most of our variables, except with each other.

We will look in more detail at the correlations between our goals and variables in the analysis of the influence of the factors on the Sustainable Development Goals.

In order to have an overview of the relationship between our independent variables and the SDG overall score, we make several graphs containing the Spearman correlation coefficient between the variables, the scatter plots describing these relationships, as well as the distribution of each variable.

Code
#### Spearman's correlation coeff ####

lower.panel <- function(x, y, ...){
   points(x, y, pch = 20, col = "darkgreen", cex = 0.2)
}
 
 panel.hist <- function(x, ...){
   usr <- par("usr"); on.exit(par(usr))
   par(usr = c(usr[1:2], 0, 1.5) )
   h <- hist(x, plot = FALSE)
   breaks <- h$breaks; nB <- length(breaks)
   y <- h$counts; y <- y/max(y)
   rect(breaks[-nB], 0, breaks[-1], y, col = "lightgreen", ...)
 }
 
 # panel.cor_stars function with stars alongside correlation coefficients
 panel.cor_stars <- function(x, y, digits = 2, prefix = "", cex.cor, ...) {
   usr <- par("usr"); on.exit(par(usr))
   par(usr = c(0, 1, 0, 1))
   r <- cor(x, y, method = "spearman")
   p_value <- cor.test(x, y, method = "spearman")$p.value
 
   if (p_value < 0.001){
     stars <- "***"
   } else if (p_value < 0.01) {
     stars <- "**"
   } else if (p_value < 0.05) {
     stars <- "*"
   } else {
     stars <- ""
   }
   txt <- paste0(format(c(r, 0.123456789), digits = digits)[1], " ", stars)
   if(missing(cex.cor)) cex.cor <- 0.5/strwidth(txt)
   text(0.5, 0.5, txt, cex = cex.cor)
 }
 
 # # Independent variables
 pairs(Correlation_overall[,c("overallscore", "unemployment.rate", "GDPpercapita", "MilitaryExpenditurePercentGDP", "internet_usage")], upper.panel=panel.cor_stars, diag.panel=panel.hist, lower.panel = lower.panel, main="Correlation table and distribution of various variables")
 
 # pairs(Correlation_overall[,c("overallscore", "pf_law", "pf_security", "pf_movement", "pf_religion", "pf_assembly" ,"pf_expression" ,"pf_identity", "ef_government", "ef_legal", "ef_money", "ef_trade", "ef_regulation")], upper.panel=panel.cor_stars, diag.panel=panel.hist, lower.panel = lower.panel, main="Correlation table and distribution of HFI variables")

Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.

The overall SDG achievement score is highly correlated with the percentage of people using the internet (r=.79) and GDP per capita (r=.60). The unemployment rate as well as the military expenditure in percentage of GDP do not seem to play a role. However, this holds only for the overall score.

The overall SDG achievement score is highly correlated with “personal freedom: law” (r=.69) and “personal freedom: identity” (r=.62). The other dimensions of personal freedom do not seem to have an important influence. Regarding the distribution of the personal freedom variables, we notice that, except for law, all are left-skewed, meaning that most of the countries have high scores.

The overall SDG achievement score is highly correlated with “economic freedom: legal” (r=.77), “economic freedom: trade” (r=.67) and “economic freedom: money” (r=.60), while the other dimensions of economic freedom do not seem to have an important influence. Regarding the distribution of the economic freedom variables, we notice more heterogeneous distributions and scores across countries than for personal freedom.

3.2 Focus on the relationships among the SDGs

How are the different SDGs linked? (We want to see whether a high score on one SDG implies a high score on another, and thus whether we can form groups of SDGs that are comparable in that way.)

3.2.1 EDA: General visualization of the SDGs

We now want to explore and analyse how the SDG scores are linked together. We first look at the correlations between the goal scores, using a correlation heatmap. To concentrate our attention on the most correlated goals, we set an arbitrary threshold: we keep correlations above 0.5 (indicating a strong positive relationship) or below -0.5 (signifying a strong negative relationship).

Given that, as seen previously, our variables do not follow a normal distribution, the Pearson correlation method is not suitable for our analysis, since it assumes normally distributed observations. We attempted to normalize the data through logarithmic and square-root transformations, but these adjustments were not effective enough. Consequently, we compute the Spearman correlation instead: while not ideal, this method does not require our data to be normally distributed. In our analysis, particularly for the heatmap visualization, we focus on correlations that exceed the threshold of 0.5 or fall below -0.5. This selective approach enhances the readability and interpretability of the heatmap.
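As a quick toy illustration (invented numbers, not from our data) of why Spearman suits monotone but non-normal relationships: it equals the Pearson correlation computed on the ranks, so any monotone transformation of the variables leaves it unchanged.

```r
x <- c(1, 2, 3, 4, 5, 20)   # small, skewed toy sample
y <- x^3                    # monotone but strongly non-linear in x

cor(x, y, method = "pearson")   # below 1: Pearson understates the monotone link
cor(x, y, method = "spearman")  # exactly 1
cor(rank(x), rank(y))           # identical to the Spearman value
```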

To do that, we select only the columns of interest and compute the correlation matrix using Spearman correlation. We then melt the matrix so it can be plotted as a heatmap with ggplot2.

Code
#### Preparation of the data ####

# Keeping only the columns of interest for the correlation calculation
data_4_goals <- data_4 %>%
  dplyr::select(overallscore, goal1, goal2, goal3, goal4, goal5,
                goal6,goal7, goal8, goal9, goal10, goal11, goal12,
                goal13, goal15, goal16, goal17)
Code
#### Spearman Correlation ####

# Calculate Spearman correlation
spearman_corr_4 <- cor(data_4_goals, method = "spearman", use = "everything")

# Apply threshold and replace values below it with NA
spearman_corr_4[abs(spearman_corr_4) < threashold_heatmap] <- NA
Code
#### Spearman Correlation Heatmap ####

# Melting the data
melted_corr_4 <- melt(spearman_corr_4, na.rm = TRUE)

# Creation of the heatmap
ggplot(data = melted_corr_4, aes(x = Var1, y = Var2, fill = value)) +
    geom_tile() +
    geom_text(aes(label = sprintf("%.2f", value)), vjust = 0.5, size=2.5) + # Adding text
    scale_fill_gradient2(low = "blue", high = "red", mid = "white", 
                         midpoint = 0, limit = c(-1,1), space = "Lab", 
                         name="Spearman\nCorrelation",
                         na.value = "grey") +
    theme_minimal() +
    theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
    labs(title = "Heatmap of Spearman Correlations for Goals", 
         x = "", y = "")

The correlations can be read directly from the graph: the darker the colour, the stronger the correlation. Cells left grey indicate goal pairs whose correlation does not exceed our threshold of ±0.5.

It is evident that the Sustainable Development Goals (SDGs) are intricately interconnected. However, certain goals appear less interrelated than others. Specifically, SDG 1 (No Poverty) and SDG 10 (Reduced Inequalities) demonstrate a weaker correlation with the rest of the goals. Similarly, Goal 15 (Life on Land) also exhibits a lesser degree of interconnection with the other SDGs. It is also interesting to note that some goals are negatively correlated with others. For instance, based on the Spearman correlation, Goal 12 (Responsible Consumption and Production) and Goal 13 (Climate Action) are negatively correlated with the other goals: the higher a country scores on the other goals, the lower it tends to score on Goals 12 and 13. Given their similar nature, it is not surprising that Goals 12 and 13 are highly correlated with each other.

3.2.2 Analysis: Factor analysis and Stepwise regression applied to the SDGs

So far, we have seen that the goals are mostly correlated. We now want to see whether we can group them into a smaller number of factors. To do that, we use principal component analysis (PCA). We first look at the scree plot to decide how many components to keep, and then at the biplot to see how the goals group together.

Code
#### Scree Plot ####

# Selecting only the goals columns
goals_data <- data_4 %>%
  dplyr::select(goal1, goal2, goal3, goal4, goal5,
                goal6,goal7, goal8, goal9, goal10, goal11, goal12,
                goal13, goal15, goal16, goal17)
goals_data_scaled <- scale(goals_data) # Scaling the data
pca_result <- prcomp(goals_data_scaled) # Running PCA

# Plotting Scree plot to visualize the importance of each principal component
fviz_eig(pca_result,
         addlabels = TRUE,
         col.var="dodgerblue3") +
  theme_minimal()

eigenvalues <- pca_result$sdev^2 # getting the eigenvalues

We see clearly that the first component is the most important one. Guided by the Kaiser criterion, which advises retaining only those components with eigenvalues exceeding 1, the initial three components emerge as candidates. However, considering the third component’s eigenvalue of 1.016, we opted for simplification by focusing exclusively on the first two components. This decision also enhances clarity in the biplot representation, as it reduces the dimensions to just two, making interpretation more straightforward.
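As a self-contained sketch of the Kaiser criterion (using the built-in USArrests data as a stand-in for the goal scores; the logic is identical for pca_result):

```r
# PCA on standardized variables; eigenvalues are the component variances.
pca_demo <- prcomp(scale(USArrests))
eig <- pca_demo$sdev^2

sum(eig > 1)             # Kaiser criterion: number of components to retain
cumsum(eig) / sum(eig)   # cumulative proportion of variance explained
```

On standardized data the eigenvalues sum to the number of variables, so "eigenvalue > 1" means a component explains more variance than any single original variable.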

Code
#### Biplot ####

# Plotting Biplot to visualize the two main dimensions
fviz_pca_biplot(pca_result,
                label="var",
                col.var="dodgerblue3",
                geom="point",
                pointsize = 0.1,
                labelsize = 4) +
  theme_minimal()

The biplot offers insightful visualization, clearly illustrating the relationship between the various goals and the first two components. Notably, Dimension 2 exhibits a strong correlation with Goals 10 (Reduced inequalities) and 15 (Life on Land), whereas the remaining goals show a moderate to high correlation with Dimension 1. Furthermore, an interesting pattern emerges, revealing three distinct groups of variables, each playing a unique role. One group comprises Goals 12 (Responsible Consumption and Production) and 13 (Climate Action), another encompasses Goals 10 (Reduced inequalities) and 15 (Life on Land), and the third group includes all other variables. This categorization aids in understanding the distinct influences and interactions among the goals.

Grouping Goal 12 (Responsible Consumption and Production) and Goal 13 (Climate Action) together is logical, as both pertain to environmental issues. It is, however, interesting to note the pairing of Goal 10 (Reduced Inequalities) with Goal 15 (Life on Land). Goal 10 concerns the reduction of inequalities within and among countries, while Goal 15 concerns the protection, restoration and sustainable use of terrestrial ecosystems, the sustainable management of forests, combating desertification, and halting and reversing land degradation and biodiversity loss. It is therefore possible that countries that do better at reducing inequalities also tend to do better at protecting terrestrial ecosystems, but this interpretation is a stretch.


Code
goals_data <- data_4 %>%
  dplyr::select(overallscore, goal1, goal2, goal3, goal4, goal5,
                goal6,goal7, goal8, goal9, goal10, goal11, goal12,
                goal13, goal15, goal16, goal17)
Code
lm_o_n <- lm(overallscore ~ 1, data = goals_data)
lm_o_f <- lm(overallscore ~ goal1 + goal2 + goal3 + goal4 + goal5
                      + goal6 + goal7 + goal8 + goal9 + goal10 + goal11
                      + goal12 + goal13 + goal15 + goal16 + goal17,
             data = goals_data)
step_o <- step(lm_o_n, scope = list(lower = lm_o_n, upper = lm_o_f))
#> Start:  AIC=16177
#> overallscore ~ 1
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal3   1    374063  51968  9162
#> + goal7   1    337079  88952 10955
#> + goal1   1    334843  91187 11038
#> + goal6   1    332271  93760 11130
#> + goal4   1    324051 101980 11411
#> + goal11  1    318638 107393 11583
#> + goal9   1    301672 124358 12072
#> + goal16  1    265279 160752 12928
#> + goal12  1    243137 182894 13359
#> + goal8   1    233128 192903 13536
#> + goal5   1    212825 213206 13870
#> + goal2   1    203256 222775 14017
#> + goal13  1    166142 259889 14531
#> + goal17  1    151745 274286 14710
#> + goal10  1    114345 311686 15137
#> + goal15  1     25418 400613 15974
#> <none>                426031 16177
#> 
#> Step:  AIC=9162
#> overallscore ~ goal3
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal6   1     13969  37998  8120
#> + goal7   1     10675  41293  8398
#> + goal2   1     10558  41410  8407
#> + goal5   1     10064  41904  8447
#> + goal4   1      9984  41984  8453
#> + goal15  1      9816  42152  8466
#> + goal9   1      7828  44139  8620
#> + goal11  1      7762  44206  8625
#> + goal8   1      7457  44511  8648
#> + goal17  1      6369  45599  8728
#> + goal10  1      5728  46240  8775
#> + goal16  1      5709  46259  8776
#> + goal1   1      5509  46459  8791
#> + goal12  1      1100  50868  9093
#> + goal13  1       174  51794  9153
#> <none>                 51968  9162
#> - goal3   1    374063 426031 16177
#> 
#> Step:  AIC=8120
#> overallscore ~ goal3 + goal6
#> 
#>          Df Sum of Sq   RSS   AIC
#> + goal10  1      6566 31432  7490
#> + goal7   1      6491 31508  7498
#> + goal4   1      6383 31615  7509
#> + goal15  1      4880 33118  7664
#> + goal2   1      4855 33144  7666
#> + goal5   1      4535 33464  7698
#> + goal17  1      3904 34094  7761
#> + goal11  1      3780 34219  7773
#> + goal16  1      3585 34413  7792
#> + goal9   1      3228 34770  7826
#> + goal1   1      3067 34932  7842
#> + goal8   1      2527 35472  7893
#> + goal13  1        67 37932  8116
#> + goal12  1        45 37954  8118
#> <none>                37998  8120
#> - goal6   1     13969 51968  9162
#> - goal3   1     55762 93760 11130
#> 
#> Step:  AIC=7490
#> overallscore ~ goal3 + goal6 + goal10
#> 
#>          Df Sum of Sq   RSS   AIC
#> + goal4   1      8227 23205  6480
#> + goal7   1      7315 24117  6608
#> + goal5   1      6869 24563  6669
#> + goal11  1      5784 25648  6813
#> + goal17  1      5446 25986  6857
#> + goal2   1      4737 26695  6947
#> + goal15  1      3001 28431  7157
#> + goal1   1      2154 29278  7255
#> + goal16  1      1447 29985  7334
#> + goal9   1      1419 30013  7338
#> + goal8   1      1361 30071  7344
#> + goal13  1      1061 30371  7377
#> + goal12  1       324 31108  7457
#> <none>                31432  7490
#> - goal10  1      6566 37998  8120
#> - goal6   1     14808 46240  8775
#> - goal3   1     40764 72196 10261
#> 
#> Step:  AIC=6480
#> overallscore ~ goal3 + goal6 + goal10 + goal4
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal17  1      5305 17900 5616
#> + goal7   1      3973 19232 5855
#> + goal15  1      3739 19466 5896
#> + goal5   1      3538 19667 5930
#> + goal2   1      3326 19879 5966
#> + goal11  1      3242 19962 5980
#> + goal16  1      2361 20844 6124
#> + goal9   1      1400 21805 6274
#> + goal13  1      1211 21994 6303
#> + goal8   1      1028 22177 6330
#> + goal1   1       702 22503 6379
#> + goal12  1       406 22799 6423
#> <none>                23205 6480
#> - goal4   1      8227 31432 7490
#> - goal10  1      8410 31615 7509
#> - goal6   1     10770 33975 7749
#> - goal3   1     12033 35238 7871
#> 
#> Step:  AIC=5616
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal2   1      3704 14196 4845
#> + goal7   1      2992 14908 5008
#> + goal15  1      2891 15009 5030
#> + goal11  1      2068 15831 5208
#> + goal5   1      2060 15840 5210
#> + goal8   1      1723 16177 5280
#> + goal13  1      1287 16613 5369
#> + goal16  1      1031 16868 5420
#> + goal9   1       920 16980 5442
#> + goal12  1       414 17485 5540
#> + goal1   1       335 17564 5555
#> <none>                17900 5616
#> - goal17  1      5305 23205 6480
#> - goal4   1      8087 25986 6857
#> - goal6   1      8501 26401 6910
#> - goal3   1      8842 26741 6953
#> - goal10  1     10072 27971 7103
#> 
#> Step:  AIC=4845
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal7   1      3109 11087 4022
#> + goal15  1      2597 11599 4173
#> + goal11  1      2072 12124 4321
#> + goal5   1      1323 12872 4520
#> + goal16  1       918 13278 4624
#> + goal13  1       893 13303 4630
#> + goal1   1       716 13479 4674
#> + goal8   1       655 13540 4689
#> + goal9   1       401 13794 4751
#> + goal12  1       108 14088 4821
#> <none>                14196 4845
#> - goal2   1      3704 17900 5616
#> - goal6   1      4986 19182 5847
#> - goal17  1      5683 19879 5966
#> - goal4   1      6616 20811 6118
#> - goal3   1      8183 22379 6361
#> - goal10  1      9814 24009 6595
#> 
#> Step:  AIC=4022
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal15  1      2888  8199 3018
#> + goal5   1      1308  9779 3606
#> + goal11  1      1083 10004 3681
#> + goal8   1      1033 10054 3698
#> + goal16  1       996 10091 3710
#> + goal9   1       743 10344 3793
#> + goal13  1       461 10626 3883
#> + goal1   1       231 10856 3954
#> + goal12  1        35 11052 4014
#> <none>                11087 4022
#> - goal7   1      3109 14196 4845
#> - goal3   1      3550 14637 4947
#> - goal2   1      3821 14908 5008
#> - goal6   1      3830 14917 5010
#> - goal4   1      3949 15036 5036
#> - goal17  1      4653 15740 5189
#> - goal10  1      9921 21008 6152
#> 
#> Step:  AIC=3018
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal11  1      1230  6969 2478
#> + goal8   1       642  7557 2748
#> + goal5   1       636  7563 2751
#> + goal16  1       625  7574 2755
#> + goal13  1       616  7583 2760
#> + goal9   1       600  7599 2766
#> + goal1   1       357  7842 2871
#> + goal12  1       244  7956 2919
#> <none>                 8199 3018
#> - goal6   1      2125 10325 3785
#> - goal15  1      2888 11087 4022
#> - goal7   1      3400 11599 4173
#> - goal2   1      3512 11711 4205
#> - goal17  1      3811 12011 4289
#> - goal4   1      4347 12546 4435
#> - goal3   1      4380 12579 4443
#> - goal10  1      7516 15715 5186
#> 
#> Step:  AIC=2478
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal13  1       881  6088 2029
#> + goal9   1       529  6440 2217
#> + goal8   1       497  6472 2233
#> + goal12  1       490  6480 2237
#> + goal1   1       483  6487 2241
#> + goal5   1       403  6566 2281
#> + goal16  1       229  6740 2368
#> <none>                 6969 2478
#> - goal11  1      1230  8199 3018
#> - goal6   1      1641  8611 3181
#> - goal7   1      2312  9282 3432
#> - goal3   1      2907  9876 3639
#> - goal15  1      3034 10004 3681
#> - goal17  1      3139 10108 3716
#> - goal2   1      3489 10458 3830
#> - goal4   1      3552 10522 3850
#> - goal10  1      8115 15085 5051
#> 
#> Step:  AIC=2029
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal9   1      1579  4509 1030
#> + goal8   1       837  5251 1538
#> + goal5   1       785  5303 1571
#> + goal16  1       468  5620 1764
#> + goal1   1       461  5627 1769
#> <none>                 6088 2029
#> + goal12  1         0  6088 2031
#> - goal13  1       881  6969 2478
#> - goal11  1      1495  7583 2760
#> - goal7   1      1760  7848 2874
#> - goal6   1      2064  8152 3001
#> - goal2   1      3077  9165 3391
#> - goal17  1      3149  9237 3417
#> - goal15  1      3243  9331 3451
#> - goal3   1      3423  9512 3515
#> - goal4   1      3761  9849 3632
#> - goal10  1      8990 15078 5052
#> 
#> Step:  AIC=1030
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal1   1       863  3646  324
#> + goal8   1       425  4084  702
#> + goal5   1       397  4112  725
#> + goal16  1       294  4215  807
#> + goal12  1       135  4374  930
#> <none>                 4509 1030
#> - goal6   1      1498  6008 1985
#> - goal11  1      1542  6051 2009
#> - goal9   1      1579  6088 2029
#> - goal13  1      1931  6440 2217
#> - goal7   1      1947  6457 2225
#> - goal2   1      1978  6488 2241
#> - goal3   1      2035  6545 2270
#> - goal17  1      2595  7105 2544
#> - goal15  1      3118  7627 2781
#> - goal4   1      3848  8357 3086
#> - goal10  1      7840 12349 4388
#> 
#> Step:  AIC=324
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9 + goal1
#> 
#>          Df Sum of Sq   RSS  AIC
#> + goal5   1      1138  2508 -922
#> + goal8   1       501  3145 -167
#> + goal16  1       332  3315    8
#> + goal12  1       132  3514  203
#> <none>                 3646  324
#> - goal3   1       778  4424  966
#> - goal1   1       863  4509 1030
#> - goal6   1      1083  4729 1189
#> - goal7   1      1361  5007 1380
#> - goal11  1      1735  5381 1620
#> - goal9   1      1980  5627 1769
#> - goal13  1      2098  5744 1837
#> - goal17  1      2189  5836 1890
#> - goal2   1      2196  5843 1894
#> - goal4   1      3051  6698 2349
#> - goal15  1      3326  6972 2484
#> - goal10  1      6596 10242 3766
#> 
#> Step:  AIC=-922
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9 + goal1 + goal5
#> 
#>          Df Sum of Sq  RSS   AIC
#> + goal8   1       391 2118 -1485
#> + goal16  1       362 2147 -1439
#> + goal12  1       164 2345 -1145
#> <none>                2508  -922
#> - goal3   1       701 3210  -102
#> - goal6   1       848 3356    47
#> - goal7   1      1054 3562   246
#> - goal5   1      1138 3646   324
#> - goal11  1      1431 3940   582
#> - goal17  1      1475 3984   619
#> - goal9   1      1498 4006   638
#> - goal1   1      1604 4112   725
#> - goal4   1      1631 4139   746
#> - goal2   1      1863 4371   928
#> - goal15  1      2465 4973  1359
#> - goal13  1      2583 5092  1437
#> - goal10  1      7421 9930  3665
#> 
#> Step:  AIC=-1485
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9 + goal1 + goal5 + 
#>     goal8
#> 
#>          Df Sum of Sq  RSS   AIC
#> + goal16  1       357 1761 -2098
#> + goal12  1       216 1902 -1841
#> <none>                2118 -1485
#> - goal8   1       391 2508  -922
#> - goal3   1       621 2739  -629
#> - goal6   1       650 2768  -594
#> - goal5   1      1028 3145  -167
#> - goal7   1      1146 3264   -44
#> - goal9   1      1163 3281   -27
#> - goal11  1      1347 3464   155
#> - goal2   1      1388 3505   194
#> - goal4   1      1601 3719   391
#> - goal1   1      1641 3759   427
#> - goal17  1      1697 3814   476
#> - goal15  1      2252 4370   929
#> - goal13  1      2734 4851  1278
#> - goal10  1      7117 9235  3425
#> 
#> Step:  AIC=-2098
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9 + goal1 + goal5 + 
#>     goal8 + goal16
#> 
#>          Df Sum of Sq  RSS   AIC
#> + goal12  1       524 1236 -3276
#> <none>                1761 -2098
#> - goal16  1       357 2118 -1485
#> - goal8   1       386 2147 -1439
#> - goal3   1       422 2183 -1384
#> - goal6   1       646 2407 -1058
#> - goal11  1       895 2656  -729
#> - goal9   1      1017 2778  -580
#> - goal5   1      1057 2817  -532
#> - goal7   1      1193 2954  -375
#> - goal17  1      1366 3127  -185
#> - goal2   1      1367 3127  -185
#> - goal1   1      1703 3464   156
#> - goal4   1      1790 3551   239
#> - goal15  1      1991 3752   423
#> - goal13  1      2961 4721  1189
#> - goal10  1      5963 7723  2831
#> 
#> Step:  AIC=-3276
#> overallscore ~ goal3 + goal6 + goal10 + goal4 + goal17 + goal2 + 
#>     goal7 + goal15 + goal11 + goal13 + goal9 + goal1 + goal5 + 
#>     goal8 + goal16 + goal12
#> 
#>          Df Sum of Sq  RSS   AIC
#> <none>                1236 -3276
#> - goal3   1       463 1700 -2216
#> - goal8   1       473 1709 -2197
#> - goal12  1       524 1761 -2098
#> - goal13  1       589 1826 -1977
#> - goal16  1       666 1902 -1841
#> - goal6   1       702 1938 -1779
#> - goal2   1       881 2117 -1483
#> - goal11  1       916 2152 -1429
#> - goal17  1      1067 2303 -1203
#> - goal5   1      1119 2356 -1128
#> - goal7   1      1303 2539  -877
#> - goal9   1      1325 2561  -848
#> - goal1   1      1758 2994  -328
#> - goal4   1      1883 3119  -191
#> - goal15  1      2261 3498   191
#> - goal10  1      5881 7117  2560
leaps_o <- regsubsets(overallscore ~ goal1 + goal2 + goal3 + goal4 + goal5
                      + goal6 + goal7 + goal8 + goal9 + goal10 + goal11
                      + goal12 + goal13 + goal15 + goal16 + goal17,
                      data=goals_data, nbest=16, method="backward")
plot(leaps_o, scale = "adjr2")  # base-graphics plot; a ggplot2 theme cannot be added
summary(leaps_o)$adjr2
#>  [1] 0.786 0.891 0.858 0.919 0.919 0.917 0.916 0.893 0.941 0.934
#> [11] 0.933 0.932 0.928 0.922 0.950 0.949 0.948 0.942 0.935 0.934
#> [21] 0.963 0.963 0.962 0.952 0.948 0.944 0.971 0.970 0.967 0.956
#> [31] 0.953 0.950 0.977 0.976 0.973 0.972 0.958 0.956

Code


lm_1_n <- lm(goal1 ~ 1, data = goals_data)
lm_1_f <- lm(goal1 ~ goal2 + goal3 + goal4 + goal5 + goal6 + goal7 + goal8 + goal9 + goal10 + goal11 + goal12 + goal13 + goal15 + goal16 + goal17, data = goals_data)
step_1 <- step(lm_1_n, scope = list(lower = lm_1_n, upper = lm_1_f), direction = "forward")
#> Start:  AIC=23185
#> goal1 ~ 1
#> 
#>          Df Sum of Sq     RSS   AIC
#> + goal3   1   2766073  717859 17919
#> + goal7   1   2435787 1048145 19181
#> + goal4   1   2274604 1209328 19658
#> + goal6   1   2156293 1327639 19970
#> + goal11  1   2063862 1420070 20194
#> + goal9   1   1692954 1790978 20968
#> + goal12  1   1669907 1814025 21011
#> + goal16  1   1569958 1913975 21189
#> + goal8   1   1204114 2279818 21773
#> + goal13  1   1202593 2281339 21775
#> + goal2   1    945639 2538293 22131
#> + goal17  1    923073 2560859 22160
#> + goal5   1    789814 2694118 22330
#> + goal10  1    664196 2819737 22482
#> + goal15  1     24477 3459455 23164
#> <none>                3483932 23185
#> 
#> Step:  AIC=17919
#> goal1 ~ goal3
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal7   1     60019 657841 17630
#> + goal4   1     39110 678750 17734
#> + goal5   1     38603 679256 17737
#> + goal6   1     19757 698102 17828
#> + goal17  1     10234 707625 17873
#> + goal9   1      8228 709632 17883
#> + goal10  1      7967 709892 17884
#> + goal8   1      3831 714029 17903
#> + goal11  1      3020 714839 17907
#> + goal2   1      1477 716382 17914
#> + goal16  1       508 717351 17919
#> <none>                717859 17919
#> + goal13  1       416 717443 17919
#> + goal12  1       189 717670 17920
#> + goal15  1        82 717777 17921
#> 
#> Step:  AIC=17630
#> goal1 ~ goal3 + goal7
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal5   1     57949 599891 17324
#> + goal4   1     16616 641225 17546
#> + goal10  1     11232 646608 17574
#> + goal6   1      8419 649421 17589
#> + goal9   1      6089 651752 17601
#> + goal17  1      4175 653665 17611
#> + goal2   1      3286 654555 17615
#> + goal8   1      2750 655090 17618
#> + goal13  1      2167 655674 17621
#> + goal11  1       973 656867 17627
#> + goal12  1       451 657389 17629
#> <none>                657841 17630
#> + goal16  1       393 657447 17630
#> + goal15  1        62 657778 17631
#> 
#> Step:  AIC=17324
#> goal1 ~ goal3 + goal7 + goal5
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal4   1     45030 554861 17066
#> + goal6   1     27671 572220 17169
#> + goal17  1     15477 584414 17239
#> + goal13  1     11327 588564 17263
#> + goal12  1      6893 592998 17288
#> + goal10  1      4284 595607 17302
#> + goal15  1      2513 597378 17312
#> + goal11  1      1048 598843 17320
#> <none>                599891 17324
#> + goal8   1       194 599697 17325
#> + goal16  1        77 599814 17326
#> + goal9   1         8 599883 17326
#> + goal2   1         0 599891 17326
#> 
#> Step:  AIC=17066
#> goal1 ~ goal3 + goal7 + goal5 + goal4
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal6   1     24722 530139 16916
#> + goal17  1     19416 535446 16949
#> + goal13  1     12121 542740 16994
#> + goal12  1      8386 546476 17017
#> + goal10  1      6390 548471 17029
#> + goal15  1      5764 549097 17033
#> + goal16  1      1906 552955 17057
#> + goal2   1       394 554467 17066
#> <none>                554861 17066
#> + goal9   1       281 554581 17066
#> + goal8   1        78 554784 17068
#> + goal11  1        46 554816 17068
#> 
#> Step:  AIC=16916
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal17  1     16671 513468 16811
#> + goal13  1      7613 522527 16870
#> + goal10  1      5870 524269 16881
#> + goal12  1      4090 526050 16892
#> + goal2   1      3678 526462 16895
#> + goal15  1      1646 528494 16908
#> + goal8   1      1468 528671 16909
#> + goal16  1       595 529545 16914
#> + goal9   1       451 529688 16915
#> <none>                530139 16916
#> + goal11  1       132 530007 16917
#> 
#> Step:  AIC=16811
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal13  1      8756 504712 16756
#> + goal10  1      8062 505406 16761
#> + goal12  1      5404 508064 16778
#> + goal2   1      2590 510878 16797
#> + goal15  1      1245 512223 16805
#> + goal11  1       747 512721 16809
#> + goal9   1       551 512917 16810
#> <none>                513468 16811
#> + goal8   1       237 513231 16812
#> + goal16  1         6 513462 16813
#> 
#> Step:  AIC=16756
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal9   1      4854 499858 16726
#> + goal10  1      4003 500708 16732
#> + goal2   1      1542 503170 16748
#> + goal11  1      1078 503633 16751
#> + goal8   1      1077 503634 16751
#> + goal15  1       793 503918 16753
#> + goal16  1       672 504040 16754
#> <none>                504712 16756
#> + goal12  1         4 504707 16758
#> 
#> Step:  AIC=16726
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal10  1      6236 493622 16686
#> + goal11  1      1413 498446 16718
#> + goal15  1       923 498935 16722
#> + goal2   1       622 499237 16724
#> + goal12  1       387 499471 16725
#> <none>                499858 16726
#> + goal16  1       248 499611 16726
#> + goal8   1       231 499627 16726
#> 
#> Step:  AIC=16686
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal16  1      1230 492392 16680
#> + goal2   1       810 492811 16683
#> + goal11  1       798 492824 16683
#> + goal8   1       538 493084 16684
#> <none>                493622 16686
#> + goal15  1       184 493438 16687
#> + goal12  1       154 493467 16687
#> 
#> Step:  AIC=16680
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10 + goal16
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal2   1       769 491623 16676
#> + goal12  1       744 491648 16677
#> + goal8   1       481 491911 16678
#> + goal11  1       377 492015 16679
#> + goal15  1       300 492092 16680
#> <none>                492392 16680
#> 
#> Step:  AIC=16676
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10 + goal16 + goal2
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal12  1       448 491174 16675
#> + goal11  1       378 491244 16676
#> + goal15  1       314 491309 16676
#> <none>                491623 16676
#> + goal8   1       250 491373 16677
#> 
#> Step:  AIC=16675
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10 + goal16 + goal2 + goal12
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal11  1       380 490794 16675
#> + goal8   1       332 490843 16675
#> <none>                491174 16675
#> + goal15  1       211 490963 16676
#> 
#> Step:  AIC=16675
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10 + goal16 + goal2 + goal12 + goal11
#> 
#>          Df Sum of Sq    RSS   AIC
#> + goal8   1       304 490491 16675
#> <none>                490794 16675
#> + goal15  1       156 490638 16676
#> 
#> Step:  AIC=16675
#> goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + goal17 + goal13 + 
#>     goal9 + goal10 + goal16 + goal2 + goal12 + goal11 + goal8
#> 
#>          Df Sum of Sq    RSS   AIC
#> <none>                490491 16675
#> + goal15  1       194 490296 16675
plot(step_1)
summary(step_1)
#> 
#> Call:
#> lm(formula = goal1 ~ goal3 + goal7 + goal5 + goal4 + goal6 + 
#>     goal17 + goal13 + goal9 + goal10 + goal16 + goal2 + goal12 + 
#>     goal11 + goal8, data = goals_data)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -59.18  -6.46   0.89   7.37  36.15 
#> 
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)  5.54550    4.04058    1.37   0.1700    
#> goal3        0.71771    0.02726   26.33  < 2e-16 ***
#> goal7        0.29006    0.02094   13.85  < 2e-16 ***
#> goal5       -0.45375    0.01895  -23.95  < 2e-16 ***
#> goal4        0.27844    0.01620   17.19  < 2e-16 ***
#> goal6        0.35685    0.02887   12.36  < 2e-16 ***
#> goal17       0.25565    0.02182   11.72  < 2e-16 ***
#> goal13      -0.10712    0.02498   -4.29  1.9e-05 ***
#> goal9       -0.10962    0.01817   -6.03  1.8e-09 ***
#> goal10       0.06491    0.00965    6.73  2.1e-11 ***
#> goal16      -0.07634    0.02721   -2.81   0.0051 ** 
#> goal2       -0.03718    0.02725   -1.36   0.1726    
#> goal12      -0.07191    0.03803   -1.89   0.0587 .  
#> goal11      -0.03894    0.02522   -1.54   0.1227    
#> goal8       -0.04985    0.03477   -1.43   0.1517    
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 12.2 on 3320 degrees of freedom
#> Multiple R-squared:  0.859,  Adjusted R-squared:  0.859 
#> F-statistic: 1.45e+03 on 14 and 3320 DF,  p-value: <2e-16

Code
goals_data <- data_4 %>%
  dplyr::select(overallscore, goal1, goal2, goal3, goal4, goal5,
                goal6,goal7, goal8, goal9, goal10, goal11, goal12,
                goal13, goal15, goal16, goal17)
Code



leaps_O <- regsubsets(overallscore ~ ., data = goals_data, nbest = 20, method = "backward")
plot(leaps_O, scale = "adjr2")  # base-graphics plot; a ggplot2 theme cannot be added

Code

library(leaps)
leaps_1 <- regsubsets(goal1 ~ goal2 + goal3 + goal4 + goal5 + goal6 + goal7 + goal8 + goal9 + goal10 + goal11 + goal12 + goal13 + goal15 + goal16 + goal17,data=goals_data,nbest=17, method="backward")
plot(leaps_1, scale = "r2")  # base-graphics plot; a ggplot2 theme cannot be added

Code

fit_2 <- lm(goal2 ~ goal1 + goal3 + goal4 + goal5 + goal6 + goal7 + goal8 + goal9 + goal10 + goal11 + goal12 + goal13 + goal15 + goal16 + goal17, data = goals_data)
plot(fit_2)

leaps_2 <- regsubsets(goal2 ~ goal1 + goal3 + goal4 + goal5 + goal6 + goal7 + goal8 + goal9 + goal10 + goal11 + goal12 + goal13 + goal15 + goal16 + goal17,data=goals_data, nbest=10, method="backward")
plot(leaps_2,scale="adjr2")

3.3 Focus on the evolution of SDG scores over time

How has the adoption of the SDGs in 2015 influenced the achievement of SDGs?

We create one new variable per goal that captures the difference in SDG score between the year of the observation and the previous year. This allows us to see how much each country improves (or not) on each SDG score from year to year.
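
A minimal sketch of this step, using hypothetical toy data with columns country, year and overallscore (the per-goal differences follow the same pattern):

```r
library(dplyr)

# Toy data standing in for our SDG panel (one row per country and year)
sdg <- tibble(
  country = rep(c("A", "B"), each = 3),
  year    = rep(2014:2016, times = 2),
  overallscore = c(60.0, 61.5, 61.0, 50.0, 50.2, 51.1)
)

# Lagged difference within each country: this year's score minus last year's
sdg_diff <- sdg %>%
  group_by(country) %>%
  arrange(year, .by_group = TRUE) %>%
  mutate(diff_overallscore = overallscore - lag(overallscore)) %>%
  ungroup()
```

The first observed year of each country has no previous year, so its difference is NA.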

3.3.1 EDA: General time evolution of SDG scores

First, we look at the evolution of the overall SDG achievement score over time, by continent and by region. The general evolution of SDG scores around the world is increasing over the years, but very slowly. Looking at the continents, Europe sits above the others while Africa sits below, but in general all continents show increasing overall scores.

Grouping the countries by region refines this picture: it is Western Europe that is particularly above the rest, and Sub-Saharan Africa that is clearly below.
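
The continent-level view above can be sketched as follows, with hypothetical toy data in place of our real data frame (which has one row per country and year plus a continent column):

```r
library(dplyr)
library(ggplot2)

# Toy data: two continents, three years
scores <- tibble(
  continent    = rep(c("Europe", "Africa"), each = 3),
  year         = rep(2000:2002, times = 2),
  overallscore = c(75, 75.5, 76, 50, 50.6, 51.3)
)

# Average overall score per continent and year, then one line per continent
avg_scores <- scores %>%
  group_by(continent, year) %>%
  summarise(mean_score = mean(overallscore, na.rm = TRUE), .groups = "drop")

ggplot(avg_scores, aes(year, mean_score, color = continent)) +
  geom_line() +
  labs(x = "Year", y = "Mean SDG overall score") +
  theme_minimal()
```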

Second, we look at the evolution of the 16 individual SDG scores over time, for the whole world and by continent. All SDGs except goal 9 (industry, innovation and infrastructure) are close to one another in terms of level and growth. Goal 9 starts far below the others in 2000 and grows faster, eventually exceeding 50%. In addition, some goals barely increased their scores in the last two decades, for example goal 13 (climate action) and goal 12 (responsible consumption and production).
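
The per-goal lines can be built by reshaping the goal columns to long format first. A sketch with two hypothetical goal columns (the real data has 16):

```r
library(dplyr)
library(tidyr)
library(ggplot2)

# Toy wide data: two goal columns, two countries x two years
wide <- tibble(
  year   = c(2000, 2001, 2000, 2001),
  goal9  = c(20, 25, 22, 27),
  goal13 = c(60, 60.2, 61, 61.1)
)

# One row per (goal, year), averaged over countries
long <- wide %>%
  pivot_longer(starts_with("goal"), names_to = "goal", values_to = "score") %>%
  group_by(goal, year) %>%
  summarise(mean_score = mean(score), .groups = "drop")

ggplot(long, aes(year, mean_score, color = goal)) +
  geom_line() +
  theme_minimal()
```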

We continue with the graph that distinguishes continents to get more information.

We observe that most of the time Europe is at the top of the graph and Africa at the bottom, except for goals 12 and 13, which are linked to ecology. Some other points stand out:

  • The Americas are far behind the other parts of the world regarding goal 10: reduced inequalities.

  • Africa is far behind the other continents (even if improving) for goals 1, 3, 4 and 7.

  • Goal 9 (industry, innovation and infrastructure) shows exponential growth for almost all continents.

Third, we create an interactive world map that lets us navigate from 2000 to 2022 and see each country's level of SDG achievement (overall score).
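
A map of this kind can be sketched with a plotly choropleth. The ISO-3 codes and scores below are hypothetical toy values; the interactive version additionally wires a year slider to the data:

```r
library(plotly)

# Toy data: one overall score per country (ISO-3 code)
map_data <- data.frame(
  iso3         = c("FRA", "KEN", "BRA"),
  overallscore = c(81, 56, 72)
)

fig <- plot_ly(
  data      = map_data,
  type      = "choropleth",
  locations = ~iso3,         # ISO-3 country codes locate each country
  z         = ~overallscore, # fill color encodes the overall SDG score
  colorscale = "Viridis"
) %>%
  layout(title = "SDG overall achievement score")
```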

Again, we see that the overall SDG achievement score is increasing and that the countries shown in the most red (bad scores) are in Africa. However, it is also there that the score increases most rapidly. Our hypothesis is that a very low score is easier to improve, whereas a very high score (around 90%) is hard to push further, because that would mean approaching perfection. In the next section, we will investigate this idea further.

3.3.2 Analysis: SDG adoption in 2015

Preparing for the specific question around 2015, we keep only the years 2009 to 2022 (the seven years 2009–2015 and the seven years 2016–2022). In addition, we create a binary variable that takes the value 1 if the observation occurred after 2015 and zero otherwise.
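
A sketch of the filtering and of the post-2015 indicator, on a hypothetical toy panel with a numeric year column:

```r
library(dplyr)

# Toy panel: one row per year
panel <- tibble(year = 2005:2022, overallscore = seq(60, 68.5, by = 0.5))

# Keep 2009-2022 and flag observations after 2015
binary2015_sketch <- panel %>%
  filter(year >= 2009, year <= 2022) %>%
  mutate(after2015 = if_else(year > 2015, 1, 0))
```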

We begin by looking at the distribution of the difference in SDG scores from one year to the next (improvement if it is above zero and deterioration if it is below zero).

Code
# histogram of difference in scores between years
unique_years <- unique(binary2015$year)
plot_ly() %>%
  add_trace(
    type = "histogram", 
    data = binary2015, 
    x = ~diff_overallscore[year == 2009],
    marker = list(color = "lightgreen", line = list(color = "black", width = 1))
  ) %>%
  layout(
    title = "Distribution of SDG evolution",
    xaxis = list(title = "Year difference SDG score", range = c(-3, 3)),
    yaxis = list(title = "Frequency", range = c(0, 40)),
    sliders = list(
      list(
        active = 0,
        currentvalue = list(prefix = "Year: "),
        steps = lapply(seq_along(unique_years), function(i) {
          year <- unique_years[i]
          list(
            label = as.character(year),
            method = "restyle",
            args = list(
              list(x = list(binary2015$diff_overallscore[binary2015$year == year]))
            )
          )
        })
      )
    )
  )

We notice that across the years, the distribution stays on the right of the x-axis, which means there are more improvements than deteriorations. When there is a deterioration, it is less than one percent per year, apart from some extreme cases: in 2013, for instance, one country's overall SDG score dropped by almost 3%. Improvements of more than 2% per year are also rare. Regarding our specific question, we do not see a major shift of the distribution after 2015; if that were the case, the distribution would move further to the right. Instead, except for 2017, more and more values are centered around zero, which means fewer score improvements overall.

After visualizing the improvements and declines of the SDG overall score for the whole world, we now look at the top 5 countries in terms of improvement each year. Major improvements often come from Sub-Saharan Africa or the Middle East and North Africa. This confirms that strong efforts are made in these regions to achieve better scores, but we also know from our previous visualizations that their initial scores are lower. Moreover, the largest improvements are around 3% per year and were mostly achieved before 2015, so in terms of maximum improvements, the adoption of the SDGs in 2015 did not have a strong impact. We also notice that 2020 is the year with the smallest top improvements; we keep that in mind for the next question regarding events, and specifically COVID.
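
The yearly top-5 ranking can be sketched with slice_max. The toy data below is hypothetical; on the real data we rank diff_overallscore within each year with n = 5:

```r
library(dplyr)

# Toy yearly score differences for four countries
diffs <- tibble(
  year    = rep(c(2014, 2015), each = 4),
  country = rep(c("A", "B", "C", "D"), times = 2),
  diff_overallscore = c(0.5, 2.1, 1.0, -0.3, 0.2, 0.9, 1.8, 0.4)
)

# Best improvers per year (n = 2 here; n = 5 on the real data)
top_improvers <- diffs %>%
  group_by(year) %>%
  slice_max(diff_overallscore, n = 2, with_ties = FALSE) %>%
  ungroup()
```

The same pattern with slice_min gives the worst decliners per year.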

We continue by looking at the worst 5 countries in terms of decline in SDG overall score each year, and we see that the years with the worst declines are the most recent ones. Indeed, declines were generally no more than 1% until 2018, when larger ones became more frequent. The adoption of the SDGs in 2015 may have had a positive impact, because during the two following years the worst SDG score declines were small (no more than 1% in 2016 and no more than 0.5% in 2017). The situation was stabilizing, but only briefly, because the more extreme deteriorations came next. Interestingly, the regions with the worst declines over the past twelve years vary widely; the only pattern appears in the last four years, where most of them are in Latin America and the Caribbean.

We move on to the specific SDG scores and look at the 20 best improvements per score. We additionally differentiate between improvements that occurred before and after 2015. We want to see which goals get the best improvements and which countries put the most effort into them.
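
The goal-level ranking with a before/after flag can be sketched like this. The diff_goal columns below are hypothetical toy values; the real analysis covers 16 goals and keeps n = 20 per goal:

```r
library(dplyr)
library(tidyr)

# Toy yearly per-goal improvements for four country-year observations
goal_diffs <- tibble(
  year        = c(2012, 2013, 2016, 2017),
  country     = c("A", "B", "C", "D"),
  diff_goal9  = c(1.0, 2.0, 8.0, 9.0),
  diff_goal10 = c(30.0, 25.0, 10.0, 5.0)
)

# Reshape to long, flag the period, and keep the best improvements per goal
best_by_goal <- goal_diffs %>%
  pivot_longer(starts_with("diff_goal"), names_to = "goal", values_to = "improvement") %>%
  mutate(period = if_else(year > 2015, "after 2015", "before 2015")) %>%
  group_by(goal) %>%
  slice_max(improvement, n = 2, with_ties = FALSE) %>%  # n = 20 on the real data
  ungroup()
```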

We notice various patterns, among them:

  • Goals 2 (zero hunger), 3 (good health and well-being), 6 (clean water and sanitation), 8 (decent work and economic growth), 12 (responsible consumption and production), 16 (peace, justice and strong institutions) have very low improvements per year. Indeed, even the best ones are below 10%.

  • Goal 10 (reduced inequalities) has the best improvements, all 20 best improvements are above 20% and it goes up to 45%.

  • Some goals clearly had most of their best improvements before 2015: goals 3 (good health and well-being), 5 (gender equality), 6 (clean water and sanitation), 7 (affordable and clean energy).

  • Some goals clearly had most of their best improvements after 2015: goals 8 (decent work and economic growth), 12 (responsible consumption and production).

  • Goal 9 (industry, innovation and infrastructure) has all of its 20 best improvements after 2015.

Regarding the impact of the adoption of the SDGs in 2015, we cannot say that it was positive: there are no more big improvements after 2015 than before (in fact slightly fewer), and the most impressive improvements mostly occurred before 2015. These conclusions are supported by the next graph: we fit two separate regression lines (before and after 2015) to see whether there is a jump after the adoption of the SDGs and whether the SDG scores then increase faster. We cut the y-axis to get a better view of the different scores: since the regression lines (across all goals) stay between 30% and 85%, we only kept those values.

Code
# Graphs to show the jump (or not) in 2015

# Filter data
data_after_2015 <- filter(binary2015, as.numeric(year) >= 2015)
data_before_2016 <- filter(binary2015, as.numeric(year) <= 2015)

plotly::plot_ly() %>%
  plotly::add_trace(data = data_after_2015, x = ~year, y = ~fitted(lm(overallscore ~ year, data = data_after_2015)), type = 'scatter', mode = 'lines', line = list(color = 'blue'), name = "After 2015") %>%
  plotly::add_trace(data = data_before_2016, x = ~year, y = ~fitted(lm(overallscore ~ year, data = data_before_2016)), type = 'scatter', mode = 'lines', line = list(color = 'red'), name = "Before 2015") %>%
  plotly::layout(title = "Different patterns across SDGs before and after 2015",
         xaxis = list(title = "Year"),
         yaxis = list(title = "SDG achievement score", range = c(30, 90)),
         shapes = list(
           list(
             type = 'line',
             x0 = 2015,
             x1 = 2015,
             y0 = 0,
             y1 = 1,
             yref = 'paper',
             line = list(color = 'grey', width = 2, dash = 'dot')
           )
         ),
         updatemenus = list(
           list(
             buttons = list(
               list(
                 args = list("y", list(
                   ~fitted(lm(overallscore ~ year, data = data_after_2015)),
                   ~fitted(lm(overallscore ~ year, data = data_before_2016))
                 )),
                 label = "Overall score",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal1 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal1 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 1: \nno poverty",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal2 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal2 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 2: \nzero hunger",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal3 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal3 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 3: good health \nand well-being",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal4 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal4 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 4: \nquality education",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal5 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal5 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 5: \ngender equality",
                 method = "restyle"
               ), 
               list(
                 args = list("y", list(
                   ~fitted(lm(goal6 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal6 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 6: clean water \nand sanitation",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal7 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal7 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 7: affordable \nand clean energy",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal8 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal8 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 8: decent work \nand economic growth",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal9 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal9 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 9: industry, innovation \nand infrastructure",
                 method = "restyle"
               ), 
               list(
                 args = list("y", list(
                   ~fitted(lm(goal10 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal10 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 10: \nreduced inequalities",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal11 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal11 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 11: sustainable \ncities and communities",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal12 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal12 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 12: responsible \nconsumption and production",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal13 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal13 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 13: \nclimate action",
                 method = "restyle"
               ), 
               list(
                 args = list("y", list(
                   ~fitted(lm(goal15 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal15 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 15: \nlife on earth",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal16 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal16 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 16: peace, justice \nand strong institutions",
                 method = "restyle"
               ),
               list(
                 args = list("y", list(
                   ~fitted(lm(goal17 ~ year, data = data_after_2015)),
                   ~fitted(lm(goal17 ~ year, data = data_before_2016))
                 )),
                 label = "Goal 17: partnerships \nfor the goals",
                 method = "restyle"
               )
             )
           )
         )
  )

We notice various patterns, among them:

  • Goals 1, 3, 4, and 15 increase faster before 2015 than after.

  • Except for goal 17, none seem to increase faster after the adoption of the SDGs. Since goal 17 is about collaboration between countries for SDG achievement, it is no surprise that there was no increase before the adoption. It is thus disappointing that it is the only goal with a higher improvement rate after 2015.

  • Goal 17 also shows a small downward jump in 2015, but since it immediately increases in the following years, this is likely an artifact of fitting the two lines separately.

  • We observe small upwards jumps for goals 8, 9, 10 and 11.

To sum up, the adoption of the SDGs was a success in terms of collaboration between countries to improve on some aspects of sustainability (goal 17), but for the goals themselves we cannot conclude that improvements were faster or that radical efforts followed 2015.

3.4 Focus on the influence of events over the SDG scores

In order to get an overview of the relationship between the different event variables and the SDG overall score, we make several graphs containing the Pearson correlation coefficients between the variables, scatter plots describing their relationships, and the distribution of each variable.

Code

lower.panel <- function(x, y, ...){
  points(x, y, pch = 20, col = "darkgreen", cex = 0.2)
}

panel.hist <- function(x, ...){
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nB], 0, breaks[-1], y, col = "lightgreen", ...)
}

# panel.cor_stars: correlation coefficients with significance stars
panel.cor_stars <- function(x, y, digits = 2, prefix = "", cex.cor, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  r <- cor(x, y)
  p_value <- cor.test(x, y)$p.value

  if (p_value < 0.001) {
    stars <- "***"
  } else if (p_value < 0.01) {
    stars <- "**"
  } else if (p_value < 0.05) {
    stars <- "*"
  } else {
    stars <- ""
  }
  txt <- paste0(format(c(r, 0.123456789), digits = digits)[1], " ", stars)
  if (missing(cex.cor)) cex.cor <- 0.5/strwidth(txt)
  text(0.5, 0.5, txt, cex = cex.cor)
}
 

pairs(data_question3_1[, c("overallscore", "total_affected", "total_deaths")], upper.panel = panel.cor_stars,diag.panel = panel.hist,lower.panel = lower.panel, main = "Correlation table and distribution of Disaster variables")

Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.

The variables used to capture the impact of climate disasters do not seem to have a substantial influence on the overall score. The correlation between overallscore and total_affected suggests a very weak negative linear relationship that is not statistically significant (p ≥ 0.05), and the correlation between overallscore and total_deaths also indicates a weak negative linear relationship, though this one is statistically significant at p < 0.05. We will nonetheless explore the individual SDGs further, since we believe such disasters have a specific influence on some of them.

Code
pairs(data_question3_2[,c("overallscore", "cases_per_million", "deaths_per_million", "stringency")], upper.panel = panel.cor_stars, diag.panel=panel.hist, lower.panel = lower.panel,main="Correlation table and distribution of COVID variables")

Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.

The variables used to capture the impact of COVID-19 do not seem to have a substantial influence on the overall score: the correlations between overallscore and cases_per_million, deaths_per_million and stringency each indicate a weak positive linear relationship that is highly statistically significant at p < 0.001. We will nonetheless explore the individual SDGs further, since we believe COVID-19 had a specific influence on some of them, for instance “good health and well-being” or “decent work and economic growth”.

The correlations among the COVID-19 variables themselves hold few surprises. cases_per_million and deaths_per_million have a moderate to strong positive correlation: an increase in the number of COVID-19 cases per million is associated with a substantial increase in the number of deaths per million, indicating a clear link between case prevalence and mortality. cases_per_million and stringency have a moderate positive correlation, meaning that higher case counts are associated with somewhat stricter health measures; where cases are more numerous, stricter sanitary measures may be put in place to control the spread of the virus. Finally, deaths_per_million and stringency have a strong positive correlation: higher mortality rates go together with stricter sanitary measures, suggesting that where deaths are higher, stricter measures are applied in an attempt to reduce the spread of the virus and mortality.

Code
pairs(data_question3_3[,c("overallscore", "ongoing", "sum_deaths", "pop_affected", "area_affected", "maxintensity")], upper.panel = panel.cor_stars, diag.panel=panel.hist, lower.panel = lower.panel, main="Correlation table and distribution of conflicts variables")

Meaning of the stars: *** : p_value < 0.001; ** : p_value < 0.01; *: p_value <0.05; no star if p_value is higher.

Negative coefficients (ranging from -0.17 to -0.28) with three stars (***) indicate weak but highly statistically significant negative correlations between the overall index (overallscore) and the various conflict-related variables (ongoing, sum_deaths, pop_affected, area_affected, maxintensity): higher conflict values are associated with lower overall scores. We have to keep in mind, however, that correlation does not necessarily imply direct causation; further analysis would be required to understand the nature of these relationships in depth.

To explore our data on events such as disasters, COVID-19 and conflicts, we first have to see which countries are most affected by them. To do so, we ran a time-series analysis on each of these three events, depending on different variables.

Code
# Converted 'year' column to date format
Q3.1$year <- as.Date(as.character(Q3.1$year), format = "%Y")
Q3.2$year <- as.Date(as.character(Q3.2$year), format = "%Y")
Q3.3$year <- as.Date(as.character(Q3.3$year), format = "%Y")

This is our time-series analysis of COVID-19 cases per million by region between the end of 2018 and 2022.

Code
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2018-12-12"), ]

ggplot(data = covid_filtered, aes(x = year, y = cases_per_million, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.6) + 
  labs(x = "Year", y = "Cases per Million") +
  facet_wrap(~ region, ncol = 3, strip.position = "top") +
  scale_y_continuous(labels = function(x) format(x, scientific = FALSE)) +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        plot.title = element_text(hjust = 0.5),
        panel.spacing = unit(1, "lines"),
        legend.position = "none"
  ) +
  ggtitle("Trend of COVID-19 Cases per Million Over Time")

This is our time-series analysis of COVID-19 deaths per million by region between the end of 2018 and 2022.

Code

ggplot(data = covid_filtered, aes(x = year, y = deaths_per_million, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.8, size = 0.6) + 
  labs(x = "Year", y = "Deaths per Million") +
  facet_wrap(~ region, nrow = 5, strip.position = "top") +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        panel.spacing = unit(0.5, "lines"),
        plot.title = element_text(hjust = 0.5),
        legend.position = "none"
  ) +
  ggtitle("Trend of COVID-19 Deaths per Million Over Time")

This is our time-series analysis of COVID-19 stringency by region between the end of 2018 and 2022.

Code
ggplot(data = covid_filtered, aes(x = year, y = stringency, group = region, color = region)) +
  geom_smooth(method = "loess",  se = FALSE, span = 0.7, size = 0.6) + 
  labs(x = "Year", y = "Stringency") +
  facet_wrap(~ region, nrow = 5) +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        panel.spacing = unit(0.5, "lines"),
        plot.title = element_text(hjust = 0.5),
        legend.position = "none"
  ) +
  ggtitle("Trend of COVID-19 Stringency Over Time")

This is our time-series analysis of climatic disasters, using the total number of people affected per region.

Code
Q3.1[is.na(Q3.1)] <- 0
ggplot(data = Q3.1, aes(x = year, y = total_affected, group = region, color = region)) +
  geom_smooth(method = "loess",  se = FALSE, span = 0.7, size = 0.5) + 
  labs(x = "Year", y = "Total Affected") +
  facet_wrap(~ region, nrow = 5) +
  scale_y_continuous(labels = function(x) format(x, scientific = FALSE)) +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        panel.spacing = unit(0.5, "lines"),
        plot.title = element_text(hjust = 0.5),
        legend.position = "none"
  ) +
  ggtitle("Trend of Total Affected from Climatic Disasters Over Time")

This is our time-series analysis of conflict deaths per region between 2000 and 2016.

Code
conflicts_filtered <- Q3.3[Q3.3$year >= as.Date("2000-01-01") & Q3.3$year <= as.Date("2016-12-31"), ]

ggplot(data = conflicts_filtered, aes(x = year, y = sum_deaths, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.6) +
  labs(x = "Year", y = "Sum of Deaths") +
  facet_wrap(~ region, nrow = 5) +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        panel.spacing = unit(0.5, "lines"),
        plot.title = element_text(hjust = 0.5),
        legend.position = "none"
  ) +
  ggtitle("Trend of Deaths by Conflicts Over Time")

We can see that the regions most affected by conflicts are the Middle East and North Africa, Sub-Saharan Africa and South Asia, followed to a lesser extent by Latin America & the Caribbean and Eastern Europe.

This is our time-series analysis of the population affected by conflicts per region between 2000 and 2016.

Code
ggplot(data = conflicts_filtered, aes(x = year, y = pop_affected, group = region, color = region)) +
  geom_smooth(method = "loess", se = FALSE, span = 0.3, size = 0.6) + 
  labs(x = "Year", y = "Population affected") +
  facet_wrap(~ region, nrow = 5) +
  theme(axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
        axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
        strip.text = element_text(size = 8),
        panel.spacing = unit(0.5, "lines"),
        plot.title = element_text(hjust = 0.5),
        legend.position = "none"
  ) +
  ggtitle("Trend of Population Affected by Conflicts Over Time")

We can see that the regions most affected by conflicts are the Middle East and North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe and sometimes the Caucasus and Central Asia.

Now that we have visualized which regions are the most impacted by these three events, we can run correlation analyses per region to see whether these events indeed have an impact on the evolution of the SDG goals.

Here we want to analyse the correlation between climate disasters and the SDG goals in selected regions (South Asia, East Asia and North America).

Code
Q3.1[is.na(Q3.1)] <- 0

selected_regions <- c("South Asia", "East Asia", "North America")
disaster_selected <- Q3.1[Q3.1$region %in% selected_regions, ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "no_homeless")

correlation_matrix_disaster <- cor(disaster_selected[, relevant_columns], use = "complete.obs")

kable(correlation_matrix_disaster)
goal1 goal2 goal3 goal4 goal5 goal6 goal7 goal8 goal9 goal10 goal11 goal12 goal13 goal15 goal16 total_affected no_homeless
goal1 1.000 0.658 0.737 0.741 0.622 0.810 0.698 0.472 0.645 0.613 0.721 -0.556 -0.403 0.323 0.546 -0.011 -0.044
goal2 0.658 1.000 0.757 0.799 0.592 0.742 0.746 0.615 0.720 0.386 0.767 -0.608 -0.435 0.137 0.657 -0.027 -0.042
goal3 0.737 0.757 1.000 0.855 0.642 0.879 0.902 0.742 0.851 0.587 0.815 -0.797 -0.627 0.134 0.803 -0.027 -0.056
goal4 0.741 0.799 0.855 1.000 0.686 0.793 0.817 0.690 0.699 0.545 0.875 -0.640 -0.567 0.143 0.705 0.007 -0.008
goal5 0.622 0.592 0.642 0.686 1.000 0.642 0.570 0.460 0.661 0.468 0.709 -0.605 -0.535 0.396 0.636 -0.091 -0.147
goal6 0.810 0.742 0.879 0.793 0.642 1.000 0.867 0.625 0.756 0.648 0.779 -0.642 -0.435 0.312 0.634 -0.061 -0.093
goal7 0.698 0.746 0.902 0.817 0.570 0.867 1.000 0.664 0.720 0.511 0.760 -0.650 -0.496 0.129 0.710 -0.012 -0.034
goal8 0.472 0.615 0.742 0.690 0.460 0.625 0.664 1.000 0.653 0.482 0.683 -0.643 -0.510 0.061 0.648 -0.018 -0.035
goal9 0.645 0.720 0.851 0.699 0.661 0.756 0.720 0.653 1.000 0.624 0.662 -0.789 -0.604 0.190 0.743 -0.032 -0.044
goal10 0.613 0.386 0.587 0.545 0.468 0.648 0.511 0.482 0.624 1.000 0.525 -0.508 -0.367 0.503 0.491 -0.043 -0.044
goal11 0.721 0.767 0.815 0.875 0.709 0.779 0.760 0.683 0.662 0.525 1.000 -0.622 -0.530 0.193 0.760 -0.092 -0.109
goal12 -0.556 -0.608 -0.797 -0.640 -0.605 -0.642 -0.650 -0.643 -0.789 -0.508 -0.622 1.000 0.852 -0.209 -0.813 0.091 0.103
goal13 -0.403 -0.435 -0.627 -0.567 -0.535 -0.435 -0.496 -0.510 -0.604 -0.367 -0.530 0.852 1.000 -0.092 -0.671 0.069 0.076
goal15 0.323 0.137 0.134 0.143 0.396 0.312 0.129 0.061 0.190 0.503 0.193 -0.209 -0.092 1.000 0.154 -0.133 -0.161
goal16 0.546 0.657 0.803 0.705 0.636 0.634 0.710 0.648 0.743 0.491 0.760 -0.813 -0.671 0.154 1.000 -0.058 -0.069
total_affected -0.011 -0.027 -0.027 0.007 -0.091 -0.061 -0.012 -0.018 -0.032 -0.043 -0.092 0.091 0.069 -0.133 -0.058 1.000 0.057
no_homeless -0.044 -0.042 -0.056 -0.008 -0.147 -0.093 -0.034 -0.035 -0.044 -0.044 -0.109 0.103 0.076 -0.161 -0.069 0.057 1.000
Code

cor_melted <- as.data.frame(as.table(correlation_matrix_disaster))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between climate disasters and the SDG goals in selected regions')

We conclude that climate disasters do not appear to have a substantial impact on the SDG goals.

Here we want to analyse the correlation between COVID-19 and the SDG goals during the COVID period only.

Code
covid_filtered <- Q3.2[Q3.2$year >= as.Date("2019-01-01"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "stringency", "cases_per_million", "deaths_per_million")

relevant_data <- covid_filtered[, relevant_columns]

correlation_matrix_Covid <- cor(relevant_data, use = "complete.obs")

kable(correlation_matrix_Covid)
goal1 goal2 goal3 goal4 goal5 goal6 goal7 goal8 goal9 goal10 goal11 goal12 goal13 goal15 goal16 stringency cases_per_million deaths_per_million
goal1 1.000 0.565 0.870 0.792 0.442 0.761 0.802 0.612 0.790 0.504 0.739 -0.651 -0.565 0.109 0.714 0.066 0.339 0.360
goal2 0.565 1.000 0.587 0.578 0.453 0.597 0.504 0.642 0.579 0.265 0.512 -0.348 -0.283 0.105 0.476 0.061 0.208 0.257
goal3 0.870 0.587 1.000 0.840 0.641 0.822 0.850 0.717 0.883 0.470 0.835 -0.786 -0.672 0.159 0.825 0.042 0.408 0.377
goal4 0.792 0.578 0.840 1.000 0.639 0.752 0.814 0.609 0.774 0.331 0.777 -0.643 -0.556 0.070 0.684 0.094 0.342 0.351
goal5 0.442 0.453 0.641 0.639 1.000 0.647 0.603 0.556 0.633 0.098 0.678 -0.640 -0.558 0.215 0.635 0.033 0.326 0.268
goal6 0.761 0.597 0.822 0.752 0.647 1.000 0.762 0.689 0.799 0.365 0.750 -0.703 -0.581 0.253 0.732 0.054 0.385 0.410
goal7 0.802 0.504 0.850 0.814 0.603 0.762 1.000 0.579 0.748 0.329 0.802 -0.650 -0.504 0.128 0.699 0.062 0.337 0.374
goal8 0.612 0.642 0.717 0.609 0.556 0.689 0.579 1.000 0.711 0.389 0.608 -0.646 -0.552 0.281 0.659 -0.013 0.366 0.306
goal9 0.790 0.579 0.883 0.774 0.633 0.799 0.748 0.711 1.000 0.471 0.753 -0.852 -0.759 0.192 0.830 0.065 0.460 0.370
goal10 0.504 0.265 0.470 0.331 0.098 0.365 0.329 0.389 0.471 1.000 0.304 -0.506 -0.483 0.236 0.517 -0.029 0.259 0.127
goal11 0.739 0.512 0.835 0.777 0.678 0.750 0.802 0.608 0.753 0.304 1.000 -0.681 -0.569 0.093 0.764 0.028 0.338 0.336
goal12 -0.651 -0.348 -0.786 -0.643 -0.640 -0.703 -0.650 -0.646 -0.852 -0.506 -0.681 1.000 0.876 -0.334 -0.827 0.023 -0.463 -0.302
goal13 -0.565 -0.283 -0.672 -0.556 -0.558 -0.581 -0.504 -0.552 -0.759 -0.483 -0.569 0.876 1.000 -0.215 -0.696 0.004 -0.368 -0.185
goal15 0.109 0.105 0.159 0.070 0.215 0.253 0.128 0.281 0.192 0.236 0.093 -0.334 -0.215 1.000 0.303 -0.077 0.171 0.228
goal16 0.714 0.476 0.825 0.684 0.635 0.732 0.699 0.659 0.830 0.517 0.764 -0.827 -0.696 0.303 1.000 -0.001 0.425 0.314
stringency 0.066 0.061 0.042 0.094 0.033 0.054 0.062 -0.013 0.065 -0.029 0.028 0.023 0.004 -0.077 -0.001 1.000 0.048 0.392
cases_per_million 0.339 0.208 0.408 0.342 0.326 0.385 0.337 0.366 0.460 0.259 0.338 -0.463 -0.368 0.171 0.425 0.048 1.000 0.417
deaths_per_million 0.360 0.257 0.377 0.351 0.268 0.410 0.374 0.306 0.370 0.127 0.336 -0.302 -0.185 0.228 0.314 0.392 0.417 1.000
Code

cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between COVID and the SDG goals')

We reach the same conclusion here, which is surprising: COVID-19 shows only weak correlations with the SDG scores.

Here we want to analyse the correlation between conflict deaths and the SDG goals, restricted to the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean and Eastern Europe regions.

Code

selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "sum_deaths", "maxintensity")

correlation_matrix_Conflicts_Deaths <- cor(conflicts_selected[, relevant_columns], use = "complete.obs")

kable(correlation_matrix_Conflicts_Deaths)
goal1 goal2 goal3 goal4 goal5 goal6 goal7 goal8 goal9 goal10 goal11 goal12 goal13 goal15 goal16 sum_deaths maxintensity
goal1 1.000 0.449 0.907 0.793 0.402 0.801 0.864 0.549 0.722 0.260 0.781 -0.733 -0.592 0.034 0.594 -0.094 -0.150
goal2 0.449 1.000 0.517 0.501 0.537 0.611 0.516 0.538 0.517 0.102 0.458 -0.351 -0.319 0.159 0.407 -0.168 -0.228
goal3 0.907 0.517 1.000 0.815 0.497 0.827 0.870 0.582 0.765 0.223 0.829 -0.743 -0.580 0.010 0.661 -0.119 -0.172
goal4 0.793 0.501 0.815 1.000 0.635 0.747 0.805 0.532 0.694 0.085 0.768 -0.669 -0.528 0.002 0.489 -0.101 -0.151
goal5 0.402 0.537 0.497 0.635 1.000 0.584 0.539 0.454 0.515 -0.185 0.612 -0.459 -0.355 0.194 0.361 -0.159 -0.242
goal6 0.801 0.611 0.827 0.747 0.584 1.000 0.813 0.674 0.733 0.123 0.783 -0.715 -0.530 0.182 0.574 -0.163 -0.255
goal7 0.864 0.516 0.870 0.805 0.539 0.813 1.000 0.541 0.721 0.140 0.837 -0.705 -0.533 0.038 0.544 -0.092 -0.158
goal8 0.549 0.538 0.582 0.532 0.454 0.674 0.541 1.000 0.605 0.178 0.530 -0.523 -0.392 0.175 0.419 -0.097 -0.164
goal9 0.722 0.517 0.765 0.694 0.515 0.733 0.721 0.605 1.000 0.292 0.697 -0.757 -0.689 0.135 0.575 -0.077 -0.114
goal10 0.260 0.102 0.223 0.085 -0.185 0.123 0.140 0.178 0.292 1.000 0.037 -0.283 -0.287 0.115 0.289 0.075 0.104
goal11 0.781 0.458 0.829 0.768 0.612 0.783 0.837 0.530 0.697 0.037 1.000 -0.727 -0.565 0.029 0.650 -0.155 -0.253
goal12 -0.733 -0.351 -0.743 -0.669 -0.459 -0.715 -0.705 -0.523 -0.757 -0.283 -0.727 1.000 0.860 -0.162 -0.645 0.121 0.227
goal13 -0.592 -0.319 -0.580 -0.528 -0.355 -0.530 -0.533 -0.392 -0.689 -0.287 -0.565 0.860 1.000 -0.150 -0.472 0.077 0.113
goal15 0.034 0.159 0.010 0.002 0.194 0.182 0.038 0.175 0.135 0.115 0.029 -0.162 -0.150 1.000 0.183 -0.061 -0.135
goal16 0.594 0.407 0.661 0.489 0.361 0.574 0.544 0.419 0.575 0.289 0.650 -0.645 -0.472 0.183 1.000 -0.163 -0.255
sum_deaths -0.094 -0.168 -0.119 -0.101 -0.159 -0.163 -0.092 -0.097 -0.077 0.075 -0.155 0.121 0.077 -0.061 -0.163 1.000 0.398
maxintensity -0.150 -0.228 -0.172 -0.151 -0.242 -0.255 -0.158 -0.164 -0.114 0.104 -0.253 0.227 0.113 -0.135 -0.255 0.398 1.000
Code

cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Deaths))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between Conflicts deaths and the SDG goals')

Finally, we want to analyse the correlation between the conflict-affected population and the SDG goals, restricted to the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe, and Caucasus and Central Asia regions.

Code

# Filter data for specific regions (pop_affected)
selected_regions <- c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe","Caucasus and Central Asia")
conflicts_selected <- Q3.3[Q3.3$region %in% selected_regions, ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "pop_affected")

correlation_matrix_Conflicts_Pop_Affected <- cor(conflicts_selected[, relevant_columns], use = "complete.obs")

kable(correlation_matrix_Conflicts_Pop_Affected)
goal1 goal2 goal3 goal4 goal5 goal6 goal7 goal8 goal9 goal10 goal11 goal12 goal13 goal15 goal16 pop_affected
goal1 1.000 0.449 0.907 0.793 0.402 0.801 0.864 0.549 0.722 0.260 0.781 -0.733 -0.592 0.034 0.594 -0.066
goal2 0.449 1.000 0.517 0.501 0.537 0.611 0.516 0.538 0.517 0.102 0.458 -0.351 -0.319 0.159 0.407 -0.078
goal3 0.907 0.517 1.000 0.815 0.497 0.827 0.870 0.582 0.765 0.223 0.829 -0.743 -0.580 0.010 0.661 -0.061
goal4 0.793 0.501 0.815 1.000 0.635 0.747 0.805 0.532 0.694 0.085 0.768 -0.669 -0.528 0.002 0.489 -0.032
goal5 0.402 0.537 0.497 0.635 1.000 0.584 0.539 0.454 0.515 -0.185 0.612 -0.459 -0.355 0.194 0.361 -0.146
goal6 0.801 0.611 0.827 0.747 0.584 1.000 0.813 0.674 0.733 0.123 0.783 -0.715 -0.530 0.182 0.574 -0.104
goal7 0.864 0.516 0.870 0.805 0.539 0.813 1.000 0.541 0.721 0.140 0.837 -0.705 -0.533 0.038 0.544 -0.068
goal8 0.549 0.538 0.582 0.532 0.454 0.674 0.541 1.000 0.605 0.178 0.530 -0.523 -0.392 0.175 0.419 -0.092
goal9 0.722 0.517 0.765 0.694 0.515 0.733 0.721 0.605 1.000 0.292 0.697 -0.757 -0.689 0.135 0.575 0.001
goal10 0.260 0.102 0.223 0.085 -0.185 0.123 0.140 0.178 0.292 1.000 0.037 -0.283 -0.287 0.115 0.289 0.068
goal11 0.781 0.458 0.829 0.768 0.612 0.783 0.837 0.530 0.697 0.037 1.000 -0.727 -0.565 0.029 0.650 -0.104
goal12 -0.733 -0.351 -0.743 -0.669 -0.459 -0.715 -0.705 -0.523 -0.757 -0.283 -0.727 1.000 0.860 -0.162 -0.645 0.106
goal13 -0.592 -0.319 -0.580 -0.528 -0.355 -0.530 -0.533 -0.392 -0.689 -0.287 -0.565 0.860 1.000 -0.150 -0.472 0.018
goal15 0.034 0.159 0.010 0.002 0.194 0.182 0.038 0.175 0.135 0.115 0.029 -0.162 -0.150 1.000 0.183 -0.105
goal16 0.594 0.407 0.661 0.489 0.361 0.574 0.544 0.419 0.575 0.289 0.650 -0.645 -0.472 0.183 1.000 -0.106
pop_affected -0.066 -0.078 -0.061 -0.032 -0.146 -0.104 -0.068 -0.092 0.001 0.068 -0.104 0.106 0.018 -0.105 -0.106 1.000
Code

cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Affected))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, vjust = 1, size = 8, hjust = 1),
        axis.text.y = element_text(size = 8)) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between Conflicts Affected Population and the SDG goals')

3.5 Focus on the correlation between the SDG scores and the different events.

Based on the correlation maps above, we concluded that none of these events had a large impact on the SDG scores in any region.
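To make the "no large impact" claim a bit more concrete, the magnitude of the event-goal correlations can be summarised numerically. A minimal sketch: the helper below is ours, not part of the original analysis, and it assumes one of the correlation matrices computed above (e.g. correlation_matrix_disaster) with its event columns is at hand.

```r
# Summarise how strongly a set of event variables correlates with the
# goal scores in a correlation matrix. `event_cols` must match the
# matrix at hand (e.g. "total_affected", "total_deaths" for disasters).
summarise_event_correlations <- function(cor_mat, event_cols) {
  goal_cols <- grep("^goal", colnames(cor_mat), value = TRUE)
  vals <- cor_mat[event_cols, goal_cols, drop = FALSE]
  c(mean_abs = mean(abs(vals)), max_abs = max(abs(vals)))
}

# Example on a tiny made-up matrix with the same structure:
m <- matrix(c(1, 0.1, 0.1, 1), nrow = 2,
            dimnames = list(c("goal1", "total_deaths"),
                            c("goal1", "total_deaths")))
summarise_event_correlations(m, "total_deaths")  # mean_abs and max_abs of 0.1
```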

Here you can see an extract of our correlation map between the climate disasters and the SDG goals in South Asia, East Asia and North America, as these were the most affected regions.

Code

library(ggplot2)

disaster_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "total_deaths")
subset_data <- disaster_data[, relevant_columns]

correlation_matrix_subset <- cor(subset_data[, c("total_affected", "total_deaths")], subset_data, use = "complete.obs")

cor_melted <- reshape2::melt(correlation_matrix_subset)
names(cor_melted) <- c("Variable2", "Variable1", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
         axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
         plot.title = element_text(margin = margin(b = 20), hjust = 0.5, 
                                   vjust = 8, lineheight = 1.5)
  ) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between the climate disasters and the SDG goals in South and East Asia and North America')

Here you can see an extract of our correlation map between COVID-19 and the SDG goals.

Code

covid_filtered <- Q3.2
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "stringency", "cases_per_million", "deaths_per_million")
subset_data <- covid_filtered[, relevant_columns]

correlation_matrix_Covid <- cor(subset_data, subset_data[, c("stringency", "cases_per_million", "deaths_per_million")], use = "complete.obs")

cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

# Create the heatmap
library(ggplot2)

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
         axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
         plot.title = element_text(margin = margin(b = 20), hjust = 0.5, 
                                   vjust = 5, lineheight = 1.5)
  ) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between COVID and the SDG goals')

Here you can see an extract of our correlation map between conflicts and the SDG goals in the Middle East & North Africa, Sub-Saharan Africa, South Asia, Latin America & the Caribbean, Eastern Europe, and Caucasus and Central Asia, as these were the most affected regions. We used the same regions for sum_deaths, since conflict deaths concern the same regions as the affected population.

Code

conflicts_filtered <- Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe", "Caucasus and Central Asia"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "pop_affected", "sum_deaths", "maxintensity")

subset_data <- conflicts_filtered[, relevant_columns]

correlation_matrix_Conflicts_Pop_Aff <- cor(subset_data, subset_data[, c("pop_affected", "sum_deaths", "maxintensity")], use = "complete.obs")

cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Aff))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ggplot(data = cor_melted, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
         axis.text.y = element_text(vjust = 1, size = 8, hjust = 1),
         plot.title = element_text(margin = margin(b = 20), hjust = 0.5, 
                                   vjust = 8, lineheight = 2)
  ) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between Conflicts Affected Population & Deaths and the SDG goals')

Seeing almost the same results everywhere, we asked ourselves whether we observe no correlation because the consequences of these disasters only materialise later, so we recomputed the same correlations with a one-year gap.
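The one-year gap amounts to lagging the event variables within each country before correlating. A toy sketch with made-up numbers illustrates why the lag must be computed per country: with dplyr, grouping prevents the first row of one country from picking up the last row of the previous one.

```r
library(dplyr)

# Made-up data: two countries, three years each
toy <- tibble(
  code = rep(c("IND", "JPN"), each = 3),
  year = rep(2018:2020, times = 2),
  total_deaths = c(10, 50, 20, 5, 2, 8)
)

toy %>%
  arrange(code, year) %>%
  group_by(code) %>%
  mutate(lagged_total_deaths = lag(total_deaths)) %>%  # NA for each country's first year
  ungroup()
```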

3.5.1 Correlations for each event with a one-year gap

Here you can see, for example, our correlation map between the climate disasters and the SDG goals in South and East Asia and North America with a one-year gap.

Code

disaster_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "total_deaths")

# Lag the disaster variables by one year within each country, so that a
# given year's SDG scores are compared with the previous year's disasters
lagged_subset_data <- disaster_data %>%
  arrange(code, year) %>%
  group_by(code) %>%
  mutate(
    lagged_total_affected = lag(total_affected),
    lagged_total_deaths = lag(total_deaths)
  ) %>%
  ungroup()

# The lag introduces NAs, so restrict the correlation to complete observations
correlation_matrix_lagged <- cor(lagged_subset_data[, c("lagged_total_affected", "lagged_total_deaths")],
                                 lagged_subset_data[, relevant_columns],
                                 use = "complete.obs")

cor_melted_lagged <- reshape2::melt(correlation_matrix_lagged)
names(cor_melted_lagged) <- c("Variable2", "Variable1", "Correlation")

ggplot(data = cor_melted_lagged, aes(Variable1, Variable2, fill = Correlation)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Correlation") +
  theme_minimal() +
  theme( axis.text.x = element_text(angle = 45, size = 8, hjust = 1),
         axis.text.y = element_text(vjust = 1, size = 8, hjust = 2),
         plot.title = element_text(margin = margin(b = 20), hjust = 0.5, 
                                   vjust = 6, lineheight = 1.5),
         legend.title = element_text(size = 8)
  ) +
  coord_fixed() +
  labs(x = '', y = '',
       title = 'Correlation between the climate disasters and the SDG goals in South and East Asia and North America with a 1-year gap')+
    theme(plot.title = element_text(size = 8, vjust =12))

Even with a one-year gap, climate disasters, measured by the number of people affected and killed, do not seem to have the impact on the SDG scores that we would have expected. Still somewhat optimistic, we decided to look at these lagged correlations year by year.

3.5.2 Interactive maps of the correlation between the different events and the SDG goals with a one-year gap.

Here you can see an interactive map of the correlation between the climate disasters and the SDG goals in South Asia, East Asia and North America with a one-year gap. To interpret the results: if we select a specific year (e.g., 2020) in the app, the analysis shows the correlations between the SDG scores for that year and the disaster-related variables (total_affected and total_deaths) from the previous year (e.g., 2019). Across the years, however, no notable correlations appear.

Code
library(shiny)
library(plotly)

Q3.1 <- Q3.1 %>%
  arrange(code, year)

disaster_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "total_deaths")

# Lag the disaster variables within each country, then drop the grouping
# before computing correlations
lagged_subset_data <- disaster_data %>%
  group_by(code) %>%
  mutate(
    lagged_total_affected = lag(total_affected),
    lagged_total_deaths = lag(total_deaths)
  ) %>%
  ungroup()

# The lag introduces NAs, so restrict the correlation to complete observations
correlation_matrix_lagged <- cor(lagged_subset_data[, c("lagged_total_affected", "lagged_total_deaths")],
                                 lagged_subset_data[, relevant_columns],
                                 use = "complete.obs")

cor_melted_lagged <- reshape2::melt(correlation_matrix_lagged)
names(cor_melted_lagged) <- c("Variable2", "Variable1", "Correlation")

ui <- fluidPage(
  titlePanel("Interactive Correlation Heatmap between the climate disasters and the SDG goals in South and East Asia with 1 year gap"),
  plotlyOutput("heatmap"),
  sliderInput("year", "Select Year", min = 2000, max = 2021, value = 2012, step = 1),
  verbatimTextOutput("correlation_output"),
  actionButton("stopButton", "Stop application")
)

server <- function(input, output, session) {
  selected_data <- reactive({
    # Correlate the selected year's SDG scores with the previous year's
    # disaster variables, matched by country code
    current <- disaster_data[disaster_data$year == input$year, ]
    previous <- disaster_data[disaster_data$year == input$year - 1,
                              c("code", "total_affected", "total_deaths")]
    names(previous) <- c("code", "lagged_total_affected", "lagged_total_deaths")
    merged <- merge(current, previous, by = "code")
    
    correlation_matrix_lagged <- cor(merged[, c("lagged_total_affected", "lagged_total_deaths")],
                                     merged[, relevant_columns],
                                     use = "pairwise.complete.obs")
    
    cor_melted_lagged <- reshape2::melt(correlation_matrix_lagged)
    names(cor_melted_lagged) <- c("Variable2", "Variable1", "Correlation")
    
    return(cor_melted_lagged)
  })
  
  output$heatmap <- renderPlotly({
    p <- plot_ly(data = selected_data(), x = ~Variable1, y = ~Variable2, z = ~Correlation, 
                 type = "heatmap", colorscale = list(c(0, "blue"), c(0.5, "white"), c(1, "red")),
                 zmin = -1, zmax = 1)
    
    p <- p %>% layout(
      title = "",
      xaxis = list(title = ""),
      yaxis = list(title = ""),
      coloraxis = list(
        colorbar = list(
          title = "Correlation",  
          tickvals = c(-1, 0, 1),  
          ticktext = c("-1", "0", "1"),
          len = 5,
          thickness = 20,
          x = 0,
          xanchor = "left",
          ticks = "outside" 
        )
      )
    )
    return(p)
  })
  
  observeEvent(input$stopButton, {
    stopApp() 
  })
}

shinyApp(ui = ui, server = server)

Shiny applications not supported in static R Markdown documents

Here you can see an interactive map of the correlation between COVID-19 and the SDG goals with a one-year gap. We expected a negative correlation: the more cases and deaths caused by COVID-19, the lower the SDG scores should be. Strangely, with the one-year gap, the scores of Goal 3, Goal 6, Goal 9 and Goal 16 appear to be positively associated with the COVID-19 variables.

Code

library(shiny)
library(plotly)

# Load the COVID dataset and order it by country and year
Q3.2 <- read.csv(here("scripts", "data", "data_question3_2.csv")) %>%
  arrange(code, year)

covid_filtered <- Q3.2
relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "stringency", "cases_per_million", "deaths_per_million")

subset_data <- covid_filtered[, relevant_columns]

correlation_matrix_Covid <- cor(subset_data, subset_data[, c("stringency", "cases_per_million", "deaths_per_million")], use = "complete.obs")

cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ui <- fluidPage(
  titlePanel("Interactive Correlation Heatmap between COVID and the SDG goal with one year gap"),
  plotlyOutput("heatmap"),
  sliderInput("year", "Select Year", min = 2020, max = 2022, value = 2020, step = 1),
  actionButton("stopButton", "Stop application")
)

server <- function(input, output, session) {
  selected_covid_data <- reactive({
    # One-year gap: match the selected year's SDG scores with the
    # previous year's COVID variables, by country
    current <- covid_filtered[covid_filtered$year == input$year, ]
    previous <- covid_filtered[covid_filtered$year == input$year - 1,
                               c("code", "stringency", "cases_per_million", "deaths_per_million")]
    names(previous) <- c("code", "lagged_stringency", "lagged_cases_per_million", "lagged_deaths_per_million")
    merge(current, previous, by = "code")
  })
  
  output$heatmap <- renderPlotly({
    d <- selected_covid_data()
    correlation_matrix_Covid <- cor(d[, relevant_columns],
                                    d[, c("lagged_stringency", "lagged_cases_per_million", "lagged_deaths_per_million")],
                                    use = "pairwise.complete.obs")
    cor_melted <- as.data.frame(as.table(correlation_matrix_Covid))
    names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
    
    p <- plot_ly(data = cor_melted, x = ~Variable1, y = ~Variable2, z = ~Correlation,
                 type = "heatmap", colorscale = list(c(0, "blue"), c(0.5, "white"), c(1, "red")),
                 zmin = -1, zmax = 1)
    
    p <- p %>% layout(
      title = "",
      xaxis = list(title = ""),
      yaxis = list(title = ""),
      coloraxis = list(
        colorbar = list(
          title = "Correlation",
          tickvals = c(-1, 0, 1), 
          ticktext = c("-1", "0", "1"),
          len = 5, 
          thickness = 20, 
          x = 0,
          xanchor = "left", 
          ticks = "outside"
        )
      )
    )
    return(p)
  })
  
  observeEvent(input$stopButton, {
    stopApp()  
  })
}
shinyApp(ui = ui, server = server)

Shiny applications not supported in static R Markdown documents

Finally, here you can see an interactive map of the correlation between conflicts and the SDG goals with a one-year gap. Again, no notable correlations appear.

Code

library(shiny)
library(plotly)

Q3.3 <- Q3.3 %>%
  arrange(code, year)

conflicts_filtered <- Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe", "Caucasus and Central Asia"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "pop_affected", "sum_deaths", "maxintensity")

subset_data <- conflicts_filtered[, relevant_columns]

correlation_matrix_Conflicts_Pop_Aff <- cor(subset_data, subset_data[, c("pop_affected", "sum_deaths", "maxintensity")], use = "complete.obs")

cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Aff))
names(cor_melted) <- c("Variable1", "Variable2", "Correlation")

ui <- fluidPage(
  titlePanel("Interactive Correlation Heatmap between Conflicts in selected regions and the SDG goal with one year gap"),
  plotlyOutput("heatmap"),
  sliderInput("year", "Select Year", min = 2000, max = 2016, value = 2005, step = 1),
  actionButton("stopButton", "Stop application")
)

server <- function(input, output, session) {
  selected_conflicts_data <- reactive({
    # One-year gap: match the selected year's SDG scores with the
    # previous year's conflict variables, by country
    current <- conflicts_filtered[conflicts_filtered$year == input$year, ]
    previous <- conflicts_filtered[conflicts_filtered$year == input$year - 1,
                                   c("code", "pop_affected", "sum_deaths", "maxintensity")]
    names(previous) <- c("code", "lagged_pop_affected", "lagged_sum_deaths", "lagged_maxintensity")
    merge(current, previous, by = "code")
  })
  
  output$heatmap <- renderPlotly({
    d <- selected_conflicts_data()
    correlation_matrix_Conflicts_Pop_Aff <- cor(d[, relevant_columns],
                                                d[, c("lagged_pop_affected", "lagged_sum_deaths", "lagged_maxintensity")],
                                                use = "pairwise.complete.obs")
    cor_melted <- as.data.frame(as.table(correlation_matrix_Conflicts_Pop_Aff))
    names(cor_melted) <- c("Variable1", "Variable2", "Correlation")
    
    p <- plot_ly(data = cor_melted, x = ~Variable1, y = ~Variable2, z = ~Correlation,
                 type = "heatmap", colorscale = list(c(0, "blue"), c(0.5, "white"), c(1, "red")),
                 zmin = -1, zmax = 1)
    
    p <- p %>% layout(
      title = "",
      xaxis = list(title = ""),
      yaxis = list(title = ""),
      coloraxis = list(
        colorbar = list(
          title = "Correlation",
          tickvals = c(-1, 0, 1), 
          ticktext = c("-1", "0", "1"), 
          len = 5,  
          thickness = 20,  
          x = 0,  
          xanchor = "left", 
          ticks = "outside" 
        )
      )
    )
    return(p)
  })
  
  observeEvent(input$stopButton, {
    stopApp() 
  })
}
shinyApp(ui = ui, server = server)

Shiny applications not supported in static R Markdown documents

These results seem logical: if the SDG scores keep rising while a conflict stays at the same level or ends, we obtain a negative correlation.
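This intuition can be checked on made-up numbers: a steadily rising SDG score against a conflict that winds down yields a clearly negative correlation.

```r
# Hypothetical five-year series: scores rise while the conflict ends
sdg_score <- c(60, 62, 64, 66, 68)
conflict_deaths <- c(500, 400, 100, 50, 0)

cor(sdg_score, conflict_deaths)  # about -0.95, clearly negative
```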

Our last idea is to look at the regressions between the SDG scores and the event variables we found most relevant.

3.5.3 Regressions between the SDG scores and the event variables.

Let’s see the regressions of each score on each variable in the disasters dataset (total_affected and total_deaths).

Code

library(shiny)
library(dplyr)
library(ggplot2)
library(scales)

disaster_data <- Q3.1[Q3.1$region %in% c("South Asia", "East Asia", "North America"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "total_affected", "total_deaths")

subset_data <- disaster_data[, relevant_columns]

goal_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16")


ui <- fluidPage(
  titlePanel("SDG and Climate Disasters Regression Analysis"),
  sidebarLayout(
    sidebarPanel(
      selectInput("sdg", "Select SDG Goal:",
                  choices = goal_columns,
                  selected = goal_columns[1]),
      width = 3
    ),
    mainPanel(
      width = 9,
      plotOutput("regression_plot_affected"),
      plotOutput("regression_plot_deaths")
    )
  )
)

server <- function(input, output) {
  generate_regression_plot <- function(selected_goal) {
    formula_affected <- as.formula(paste(selected_goal, "~ total_affected"))
    formula_deaths <- as.formula(paste(selected_goal, "~ total_deaths"))
    
    lm_total_affected <- lm(formula_affected, data = subset_data)
    lm_total_deaths <- lm(formula_deaths, data = subset_data)
    
    plot_total_affected <- ggplot(subset_data, aes(x = total_affected, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Total Affected"),
           x = "Total Affected", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    plot_total_deaths <- ggplot(subset_data, aes(x = total_deaths, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Total Deaths"),
           x = "Total Deaths", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    
    list(plot_total_affected, plot_total_deaths)
  }
  
  output$regression_plot_affected <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[1]])
  })
  
  output$regression_plot_deaths <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[2]])
  })
}
shinyApp(ui, server)

Shiny applications not supported in static R Markdown documents

Most relationships between the goals and the variables (‘total_affected’ and ‘total_deaths’) are not statistically significant (indicated by p-values > 0.05). More specifically, in several models, the coefficients for ‘total_affected’ and ‘total_deaths’ are small, indicating weak or negligible relationships with the respective goals. Some models have marginally significant p-values (close to 0.05) but still lack statistical significance. Goals 7, 8, 10, 13, 14, and 15 exhibit statistically significant relationships with ‘total_affected,’ indicating small to moderate positive relationships. Goals 7 and 8 also show statistically significant relationships with ‘total_deaths,’ indicating moderate negative relationships.

These findings suggest that, in most cases, the relationships between the specified goals and the disaster variables (total affected and total deaths) are not statistically significant. However, some goals do show small to moderate associations with these variables.
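The p-values discussed above are not printed by the Shiny app itself; a compact way to reproduce them is to fit each goal against one predictor and collect the slope p-values from the model summaries. A sketch, assuming subset_data and goal_columns from the disaster-regression chunk above are available:

```r
# Slope p-values for goal_i ~ total_affected across all goals
slope_pvalues <- sapply(goal_columns, function(g) {
  fit <- lm(reformulate("total_affected", response = g), data = subset_data)
  summary(fit)$coefficients["total_affected", "Pr(>|t|)"]
})

sort(slope_pvalues)  # significant goals (p < 0.05) appear first
```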

Let’s see the regressions of each score on each variable in the COVID-19 dataset (stringency, cases_per_million and deaths_per_million).

Code

covid_filtered <- Q3.2

relevant_columns <- c(
  "goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "stringency", "cases_per_million", "deaths_per_million"
)
subset_data <- covid_filtered[, relevant_columns]

goal_columns <- c(
  "goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16"
)

ui <- fluidPage(
  titlePanel("SDG - COVID Regression Analysis"),
  sidebarLayout(
    sidebarPanel(
      selectInput("sdg", "Select SDG Goal:",
                  choices = goal_columns,
                  selected = goal_columns[1]),
      width = 3
    ),
    mainPanel(
      width = 9,
      plotOutput("regression_plot_stringency"),
      plotOutput("regression_plot_cases"),
      plotOutput("regression_plot_deaths")
    )
  )
)

server <- function(input, output) {
  generate_regression_plot <- function(selected_goal) {
    formula_stringency <- as.formula(paste(selected_goal, "~ stringency"))
    formula_cases <- as.formula(paste(selected_goal, "~ cases_per_million"))
    formula_deaths <- as.formula(paste(selected_goal, "~ deaths_per_million"))
    
    lm_stringency <- lm(formula_stringency, data = subset_data)
    lm_cases <- lm(formula_cases, data = subset_data)
    lm_deaths <- lm(formula_deaths, data = subset_data)
    
    plot_stringency <- ggplot(subset_data, aes(x = stringency, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Stringency"),
           x = "Stringency", y = selected_goal) +
      scale_x_continuous(labels = comma_format())
    
    plot_cases <- ggplot(subset_data, aes(x = cases_per_million, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Cases per Million"),
           x = "Cases per Million", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    plot_deaths <- ggplot(subset_data, aes(x = deaths_per_million, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Deaths per Million"),
           x = "Deaths per Million", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    list(plot_stringency, plot_cases, plot_deaths)
  }
  
  output$regression_plot_stringency <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[1]])  # Display plot for stringency vs selected_goal
  })
  
  output$regression_plot_cases <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[2]])  # Display plot for cases_per_million vs selected_goal
  })
  
  output$regression_plot_deaths <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[3]])  # Display plot for deaths_per_million vs selected_goal
  })
}

shinyApp(ui, server)

Shiny applications not supported in static R Markdown documents

For all goals (goal 1 to goal 16), the predictor variables (stringency, cases per million and deaths per million) show statistically significant relationships. However, when assessed individually, these predictors explain only a marginal fraction of the variance of the respective goals, with adjusted R-squared values ranging from roughly 0.141% to 6.99%. These consistently low values indicate limited explanatory power, implying that other factors, not accounted for here, drive most of the variation observed in each goal. Relying solely on stringency, cases per million and deaths per million therefore yields only modest predictive capability for any goal.

In summary, stringency, cases per million and deaths per million are clearly statistically significant in relation to each goal. Individually, however, these predictors fail to explain the variation observed, highlighting the need to explore additional variables in order to meaningfully improve predictive ability for each goal.
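The adjusted R-squared values and p-values discussed above can be pulled directly from the fitted models rather than read from individual summaries. This is a minimal sketch, assuming `subset_data` from the COVID chunk is in scope; the helper name `adj_r2_for_goal` is ours:

```r
# For one goal, fit the three simple regressions and collect
# the adjusted R-squared and slope p-value of each predictor.
adj_r2_for_goal <- function(goal, data) {
  preds <- c("stringency", "cases_per_million", "deaths_per_million")
  do.call(rbind, lapply(preds, function(p) {
    fit <- lm(as.formula(paste(goal, "~", p)), data = data)
    s <- summary(fit)
    data.frame(goal = goal, predictor = p,
               adj_r2 = s$adj.r.squared,
               p_value = coef(s)[2, "Pr(>|t|)"])
  }))
}

adj_r2_for_goal("goal1", subset_data)
```

Looping this helper over `goal_columns` reproduces the full table of explanatory percentages summarised above.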

Now, let’s look at the regressions of each SDG score on each variable in the Conflicts dataset (pop_affected, sum_deaths and maxintensity).

Code

conflicts_filtered <- Q3.3[Q3.3$region %in% c("Middle East & North Africa", "Sub-Saharan Africa", "South Asia", "Latin America & the Caribbean", "Eastern Europe", "Caucasus and Central Asia"), ]

relevant_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16", "pop_affected", "sum_deaths", "maxintensity")

subset_data <- conflicts_filtered[, relevant_columns]

goal_columns <- c("goal1", "goal2", "goal3", "goal4", "goal5", "goal6", "goal7", "goal8", "goal9", "goal10", "goal11", "goal12", "goal13", "goal15", "goal16"
)

ui <- fluidPage(
  titlePanel("SDG - Conflicts Regression Analysis"),
  sidebarLayout(
    sidebarPanel(
      selectInput("sdg", "Select SDG Goal:",
                  choices = goal_columns,
                  selected = goal_columns[1]),
      width = 3
    ),
    mainPanel(
      width = 9,
      plotOutput("regression_plot_pop_affected"),
      plotOutput("regression_plot_sum_deaths"),
      plotOutput("regression_plot_maxintensity")
    )
  )
)

server <- function(input, output) {
  generate_regression_plot <- function(selected_goal) {
    formula_pop_affected <- as.formula(paste(selected_goal, "~ pop_affected"))
    formula_sum_deaths <- as.formula(paste(selected_goal, "~ sum_deaths"))
    formula_maxintensity <- as.formula(paste(selected_goal, "~ maxintensity"))
    
    lm_pop_affected <- lm(formula_pop_affected, data = subset_data)
    lm_sum_deaths <- lm(formula_sum_deaths, data = subset_data)
    lm_maxintensity <- lm(formula_maxintensity, data = subset_data)
    
    plot_pop_affected <- ggplot(subset_data, aes(x = pop_affected, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Population Affected"),
           x = "Population Affected", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    plot_sum_deaths <- ggplot(subset_data, aes(x = sum_deaths, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Sum of Deaths"),
           x = "Sum of Deaths", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    plot_maxintensity <- ggplot(subset_data, aes(x = maxintensity, y = !!as.name(selected_goal))) +
      geom_point() +
      geom_smooth(method = "lm", se = FALSE) +
      labs(title = paste("Regression plot for", selected_goal, "vs Maxintensity"),
           x = "Maxintensity", y = selected_goal) +
      scale_x_continuous(labels = comma_format()) 
    
    list(plot_pop_affected, plot_sum_deaths, plot_maxintensity)
  }
  
  output$regression_plot_pop_affected <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[1]])  
  })
  
  output$regression_plot_sum_deaths <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[2]])  
  })
  
    output$regression_plot_maxintensity <- renderPlot({
    selected_goal <- input$sdg
    regression_plots <- generate_regression_plot(selected_goal)
    print(regression_plots[[3]])  
  })
}

shinyApp(ui, server)

Shiny applications not supported in static R Markdown documents

All three predictors exhibited statistically significant relationships with the respective goals across the board. ‘Maxintensity’ generally demonstrated a relatively stronger association than ‘Population Affected’ and ‘Deaths’ in most analyses. Collectively, however, these predictors explained only a small to moderate portion of the variability observed in the different goals (adjusted R-squared ranging from approximately 1% to 9.48%), suggesting that other, unaccounted-for factors significantly influence these outcomes. To conclude, while ‘Population Affected’, ‘Deaths’ and ‘Maxintensity’ consistently showed significant associations with the goals analysed, their combined effect explained only a fraction of the observed variance. There are therefore likely additional crucial factors beyond these predictors that play substantial roles in influencing the outcomes of the respective goals.
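The “combined effect” mentioned above can be checked explicitly with a multiple regression per goal, rather than inferred from the three simple regressions; a sketch under the assumption that `subset_data` from the conflicts chunk is still in scope:

```r
# Multiple regression: all three conflict predictors at once for goal1.
fit_all <- lm(goal1 ~ pop_affected + sum_deaths + maxintensity,
              data = subset_data)
summary(fit_all)$adj.r.squared  # joint explanatory power

# Compare against the strongest single predictor to gauge the gain.
fit_single <- lm(goal1 ~ maxintensity, data = subset_data)
summary(fit_single)$adj.r.squared
```

If the joint adjusted R-squared barely exceeds the single-predictor one, the three conflict variables carry largely overlapping information about that goal.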

4 Analysis

4.1 Answers to the research questions

4.1.1 Influence of the factors over the Sustainable Development Goals

In order to answer the first question of our work, let’s start by zooming on the correlation matrix heatmap made in our EDA part. Here are the correlations between the SDG goals and all the other variables except the SDG goals.

Code

### Correlation Matrix Heatmap SDG/Other variables ###

# compute p-values for the variables of interest
corr_matrix <- cor(data_question1[7:40], method = "spearman", use = "everything")
p_matrix2 <- matrix(nrow = ncol(data_question1[7:40]), ncol = ncol(data_question1[7:40]))
for (i in 1:ncol(data_question1[7:40])) {
  for (j in 1:ncol(data_question1[7:40])) {
    test_result <- cor.test(data_question1[7:40][, i], data_question1[7:40][, j],
                            method = "spearman", exact = FALSE)  # match the Spearman correlations above
    p_matrix2[i, j] <- test_result$p.value
  }
}

corr_matrix[which(p_matrix2 > 0.05)] <- NA #only keeping significant pval alpha = 0.05

melted_corr_matrix_GVar <- melt(corr_matrix[19:34,1:18])

ggplot(melted_corr_matrix_GVar, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  geom_text(aes(label = ifelse(!is.na(value) & abs(value) > 0.75, sprintf("%.2f", value), '')),
            color = "black", size = 2) +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Spearman\nCorrelation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1),
        axis.text.y = element_text(angle = 45, hjust = 1)) +
  labs(x = 'Other variables', y = 'Goals',
       title = 'Correlations Heatmap between goals and our other variables')

The numbers shown are the correlation coefficients that are both significant (p < 0.05) and strong (|rho| > 0.75). The grey cells correspond to non-significant p-values.

GDP per capita, internet_usage, pf_law and ef_legal are strongly correlated with most of our SDG goals. This is largely due to the broad scope these variables encompass: they influence many sectors of our economies and thus touch almost all of the SDG goals. We can therefore suspect that these variables have a strong impact on the scores. Nevertheless, as correlation does not imply causality, we cannot jump to conclusions.

As we can see, SDG goals 12 & 13 (responsible consumption & production, and climate action) are negatively correlated with most of our variables, as is the economic freedom government variable (ef_government) with the SDG goals. Nevertheless, goals 12 & 13 and ef_government are positively correlated with each other.

Now let’s zoom on the correlations between all our variables except the SDG goals:

Code
melted_corr_matrix_Var <- melt(corr_matrix[19:34,19:34])
ggplot(melted_corr_matrix_Var, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  geom_text(aes(label = ifelse(!is.na(value) & abs(value) > 0.75, sprintf("%.2f", value), '')),
            color = "black", size = 1.7) +
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1, 1), space = "Lab",
                       name = "Spearman\nCorrelation") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1),
        axis.text.y = element_text(angle = 45, hjust = 1)) +
  labs(x = 'Variables', y = 'Variables',
       title = 'Correlations Heatmap between other variables than SDG goals')


As noticed earlier, there is a strong correlation among personal freedom variables (pf), reflecting scores from the Human Freedom Index on movement, religion, assembly, and expression.

Again, we can see that GDP per capita, pf_law and ef_legal are highly correlated with some of the other variables. On the other hand, we notice that pf_movement, pf_assembly and pf_expression are now also highly correlated with some of the other variables.
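The highly correlated pairs mentioned above can be listed programmatically instead of read off the heatmap; a sketch assuming `corr_matrix` (with non-significant entries already set to `NA`) from the earlier chunk:

```r
# Variable pairs with |rho| > 0.75, upper triangle only so each
# pair appears once; NA (non-significant) entries are dropped.
high <- which(abs(corr_matrix) > 0.75 & upper.tri(corr_matrix),
              arr.ind = TRUE)
data.frame(var1 = rownames(corr_matrix)[high[, 1]],
           var2 = colnames(corr_matrix)[high[, 2]],
           rho  = corr_matrix[high])
```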

In order to have a look at the influence of some factors over our dependent variables, let’s conduct a Principal Component Analysis over the Human Freedom Index Scores.

Code
#### PCA and PCA Scree plot####

myPCA_s <- PCA(data_question1[,29:40], graph = FALSE)
fviz_eig(myPCA_s,
         addlabels = TRUE) +
  theme_minimal()
summary(myPCA_s)
#> 
#> Call:
#> PCA(X = data_question1[, 29:40], graph = FALSE) 
#> 
#> 
#> Eigenvalues
#>                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
#> Variance               6.710   1.577   1.014   0.731   0.507   0.419
#> % of var.             55.915  13.140   8.453   6.093   4.222   3.491
#> Cumulative % of var.  55.915  69.055  77.507  83.601  87.823  91.314
#>                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11  Dim.12
#> Variance               0.287   0.218   0.192   0.168   0.106   0.070
#> % of var.              2.395   1.820   1.602   1.402   0.882   0.585
#> Cumulative % of var.  93.710  95.530  97.132  98.533  99.415 100.000
#> 
#> Individuals (the 10 first)
#>                   Dist    Dim.1    ctr   cos2    Dim.2    ctr   cos2
#> 1             |  2.143 | -0.207  0.000  0.009 |  1.261  0.045  0.346
#> 2             |  2.085 | -0.135  0.000  0.004 |  1.325  0.050  0.404
#> 3             |  2.413 |  0.027  0.000  0.000 |  1.656  0.078  0.471
#> 4             |  2.529 |  0.530  0.002  0.044 |  1.430  0.058  0.320
#> 5             |  2.416 |  0.364  0.001  0.023 |  1.272  0.046  0.277
#> 6             |  2.277 |  0.378  0.001  0.028 |  1.146  0.037  0.253
#> 7             |  2.320 |  0.613  0.003  0.070 |  1.196  0.041  0.266
#> 8             |  2.605 |  0.726  0.004  0.078 |  1.614  0.074  0.384
#> 9             |  2.335 |  0.850  0.005  0.132 |  1.287  0.047  0.304
#> 10            |  2.183 |  0.909  0.006  0.173 |  0.982  0.027  0.202
#>                  Dim.3    ctr   cos2  
#> 1             | -0.542  0.013  0.064 |
#> 2             | -0.253  0.003  0.015 |
#> 3             |  0.176  0.001  0.005 |
#> 4             |  0.990  0.043  0.153 |
#> 5             |  0.579  0.015  0.057 |
#> 6             |  0.341  0.005  0.022 |
#> 7             |  0.494  0.011  0.045 |
#> 8             |  0.411  0.007  0.025 |
#> 9             |  0.292  0.004  0.016 |
#> 10            |  0.214  0.002  0.010 |
#> 
#> Variables (the 10 first)
#>                  Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
#> pf_law        |  0.871 11.310  0.759 | -0.301  5.732  0.090 | -0.110
#> pf_security   |  0.578  4.984  0.334 | -0.446 12.630  0.199 | -0.208
#> pf_movement   |  0.837 10.432  0.700 |  0.282  5.028  0.079 | -0.148
#> pf_religion   |  0.704  7.392  0.496 |  0.537 18.285  0.288 | -0.299
#> pf_assembly   |  0.839 10.482  0.703 |  0.404 10.343  0.163 | -0.206
#> pf_expression |  0.890 11.814  0.793 |  0.171  1.855  0.029 | -0.241
#> pf_identity   |  0.668  6.650  0.446 | -0.007  0.003  0.000 |  0.034
#> ef_government | -0.154  0.354  0.024 |  0.779 38.445  0.606 |  0.435
#> ef_legal      |  0.871 11.314  0.759 | -0.302  5.791  0.091 |  0.052
#> ef_money      |  0.690  7.104  0.477 | -0.128  1.047  0.017 |  0.544
#>                  ctr   cos2  
#> pf_law         1.189  0.012 |
#> pf_security    4.245  0.043 |
#> pf_movement    2.164  0.022 |
#> pf_religion    8.814  0.089 |
#> pf_assembly    4.167  0.042 |
#> pf_expression  5.703  0.058 |
#> pf_identity    0.113  0.001 |
#> ef_government 18.631  0.189 |
#> ef_legal       0.262  0.003 |
#> ef_money      29.130  0.295 |

Code
#### PCA Biplot ####
fviz_pca_biplot(myPCA_s,
                label="var",
                col.var="dodgerblue3",
                geom="point",
                pointsize = 0.1,
                labelsize = 5) +
  theme_minimal()

Concerning the Human Freedom Index scores, most of the variables are positively correlated with dimension 1, slightly less so for pf_religion, while ef_government is nearly uncorrelated with dimension 1. With an eigenvalue greater than 1 for each of the first three components, the Kaiser criterion would retain 3 dimensions. Nevertheless, these three explain less than 80% of the cumulated variance, so the 80% rule of thumb would instead suggest keeping 4 dimensions.
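Both selection rules discussed above can be read directly from the PCA object; a sketch assuming `myPCA_s` from the chunk above (a FactoMineR `PCA` result, whose `$eig` matrix stores the eigenvalues and variance percentages):

```r
eig <- myPCA_s$eig  # columns: eigenvalue, % of variance, cumulative %

# Kaiser criterion: keep components with eigenvalue > 1.
sum(eig[, "eigenvalue"] > 1)

# 80 % rule of thumb: smallest number of components whose
# cumulative percentage of variance reaches 80 %.
which(eig[, "cumulative percentage of variance"] >= 80)[1]
```

With the eigenvalues printed above (6.710, 1.577, 1.014, then 0.731), the first rule gives 3 components and the second gives 4, matching the discussion.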

Let’s try now to conduct a cluster analysis, using the Kmean method.

Code
data_kmean_country <- data_question1 %>% dplyr::select(-c(X,code,year,continent,region, population))

#filter data different than 0 and dropping observations 
filtered_data <- data_kmean_country %>%
  group_by(country) %>%
  filter_if(is.numeric, all_vars(sd(.) != 0)) %>%
  ungroup()

scale_by_country <- filtered_data %>% #scale data
  group_by(country) %>% 
  summarise_all(~ scale(.))

means_by_country <- scale_by_country %>% #mean by country
  group_by(country) %>%
  summarise_all(~ mean(., na.rm = TRUE))

rownames(means_by_country) <- seq_along(row.names(means_by_country))

fviz_nbclust(means_by_country[,-1], kmeans, method="wss")

After adapting the data for the cluster analysis, the elbow method suggests that 4 clusters are sufficient.
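The elbow method is somewhat subjective, so a second criterion is worth checking before fixing k. A sketch assuming `means_by_country` from the chunk above, using the other methods offered by factoextra’s `fviz_nbclust`:

```r
# Average silhouette width: k maximising the silhouette is preferred.
fviz_nbclust(means_by_country[, -1], kmeans, method = "silhouette")

# Gap statistic as a third check (slower, bootstrap-based).
fviz_nbclust(means_by_country[, -1], kmeans, method = "gap_stat",
             nboot = 50)
```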

Code
kmean <- kmeans(means_by_country[,-1], 4, nstart = 25)
fviz_cluster(kmean, data=means_by_country[,-1], repel=FALSE, depth =NULL, ellipse.type = "norm", labelsize = 10, pointsize = 0.5)

Our cluster analysis gives us one principal cluster (here in purple). Because the data were scaled within each country before averaging, the country means are all close to 0, which compresses the clusters around the origin and limits their separation; restricting the analysis to the HFI variables alone did not change this picture.

4.1.2 Regressions

While fitting our regressions, we noticed high multicollinearity among the explanatory variables in our models, due to the numerous variables we initially tried to take into account. Let’s find a model that can explain the overall SDG score without severe multicollinearity (VIF > 5).
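The VIF > 5 threshold mentioned above can be checked explicitly before settling on a model. A sketch using `car::vif` on a deliberately collinear model over `data_question1`; the `car` package is an assumption here, as it is not loaded elsewhere in this section:

```r
library(car)  # assumed available; provides vif()

# Model mixing several of the variables found to be highly correlated.
full_model <- lm(overallscore ~ unemployment.rate + GDPpercapita +
                   internet_usage + pf_law + ef_legal,
                 data = data_question1)

# Predictors with VIF > 5 flag severe multicollinearity.
vif(full_model)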

Code
goals_data <- data_question1 %>%
  dplyr::select(overallscore, unemployment.rate, GDPpercapita, MilitaryExpenditurePercentGDP, internet_usage, pf_law, pf_security, pf_movement, pf_religion, pf_assembly, pf_expression, pf_identity, ef_government, ef_legal, ef_money, ef_trade, ef_regulation)

fit <- lm(overallscore ~ ., data = goals_data)
# plot(fit)
library(leaps)
leaps<-regsubsets(overallscore ~ .,data=goals_data,nbest=10)
# summary(leaps)
plot(leaps, scale = "r2")  # base-graphics plot: ggplot themes do not apply here

The selected model includes the following explanatory variables: unemployment rate, military expenditure as a percentage of GDP, internet_usage, pf_security, pf_religion, pf_identity, ef_legal and ef_trade. Of the variables previously found to be highly correlated with the SDG goals (GDP per capita, pf_law, internet_usage and ef_legal), the first two were thus dropped.
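`regsubsets()` stores the fit statistics of every candidate model, so the selected model can be identified programmatically instead of read off the plot; a sketch assuming `leaps` from the chunk above:

```r
leaps_summary <- summary(leaps)

# Index of the candidate with the highest adjusted R-squared.
best <- which.max(leaps_summary$adjr2)

# Logical vector: which predictors the best model includes.
leaps_summary$which[best, ]
```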

Code
#### Forward selection ####

library(MASS)
Forward_data1 <- data_question1 %>% dplyr::select(overallscore, unemployment.rate, GDPpercapita, MilitaryExpenditurePercentGDP, internet_usage, pf_law, pf_security, pf_movement, pf_religion, pf_assembly, pf_expression, pf_identity, ef_government, ef_legal, ef_money, ef_trade, ef_regulation)
# Initialize variables to store the results
step_results <- data.frame(step = integer(), aic = numeric(), adjusted_r_squared = numeric())

# Initial model (null model)
current_model <- lm(overallscore ~ 1, data = Forward_data1)

# Record initial metrics
step_results <- rbind(step_results, data.frame(step = 0, aic = AIC(current_model), adjusted_r_squared = summary(current_model)$adj.r.squared))

# Add each candidate predictor in turn (in column order) and record the metrics.
# Note: the original grepl("goal", ...) filter matched no columns of
# Forward_data1, so the loop body never ran.
for (variable in setdiff(colnames(Forward_data1), "overallscore")) {
    current_model <- update(current_model, paste(". ~ . +", variable))
    current_step <- nrow(step_results) + 1
    step_results <- rbind(step_results, data.frame(step = current_step, aic = AIC(current_model), adjusted_r_squared = summary(current_model)$adj.r.squared))
}

ggplot(step_results, aes(x = step)) +
    geom_line(aes(y = aic, color = "AIC")) +
    geom_line(aes(y = adjusted_r_squared * 100, color = "Adjusted R-squared")) +
    labs(title = "Forward Selection Process", x = "Step", y = "Metric Value") +
    scale_color_manual("", breaks = c("AIC", "Adjusted R-squared"), values = c("blue", "red"))

Now let’s compute our regression model with the variables selected by our stepwise method.

Code
# Regression of the overall SDG score on the selected variables
reg_overall_Q1 <- lm(overallscore ~ unemployment.rate + MilitaryExpenditurePercentGDP + internet_usage + pf_security + pf_religion + pf_identity + ef_legal + ef_trade, data = data_question1)

sg1 <- stargazer(reg_overall_Q1,
          title="Impact of variables over Overallscore SDG goals",
          type='text',
          digits=3)
#> 
#> Impact of variables over Overallscore SDG goals
#> =========================================================
#>                                   Dependent variable:    
#>                               ---------------------------
#>                                      overallscore        
#> ---------------------------------------------------------
#> unemployment.rate                      14.200***         
#>                                         (1.860)          
#>                                                          
#> MilitaryExpenditurePercentGDP          0.604***          
#>                                         (0.096)          
#>                                                          
#> internet_usage                         15.600***         
#>                                         (0.482)          
#>                                                          
#> pf_security                            0.609***          
#>                                         (0.072)          
#>                                                          
#> pf_religion                            -0.804***         
#>                                         (0.072)          
#>                                                          
#> pf_identity                            0.839***          
#>                                         (0.057)          
#>                                                          
#> ef_legal                               1.540***          
#>                                         (0.113)          
#>                                                          
#> ef_trade                               1.580***          
#>                                         (0.109)          
#>                                                          
#> Constant                               33.400***         
#>                                         (0.822)          
#>                                                          
#> ---------------------------------------------------------
#> Observations                             2,226           
#> R2                                       0.822           
#> Adjusted R2                              0.822           
#> Residual Std. Error                4.670 (df = 2217)     
#> F Statistic                   1,282.000*** (df = 8; 2217)
#> =========================================================
#> Note:                         *p<0.1; **p<0.05; ***p<0.01

As we can see, all of the variables above significantly impact the overall SDG score. In addition, the adjusted R² is high (0.822), meaning the model explains a large share of the variance in the overall score.

Code
##### geom point #####

geom1 <- ggplot(data_question1, aes(internet_usage, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and internet usage")

geom2 <- ggplot(data_question1, aes(unemployment.rate, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and unemployment rate")

geom3 <- ggplot(data_question1, aes(MilitaryExpenditurePercentGDP,overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and military expenditure")

geom4 <- ggplot(data_question1, aes(pf_security,overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and pf_security")

geom5 <-ggplot(data_question1, aes(pf_religion, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and pf_religion")

geom7 <-ggplot(data_question1, aes(pf_identity, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and pf_identity")

geom8 <-ggplot(data_question1, aes(ef_legal, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and ef_legal")

geom9 <-ggplot(data_question1, aes(ef_trade, overallscore)) +
  geom_point()+ geom_smooth(se = FALSE) +
  labs(title = "Scatterplot overallscore and ef_trade")

grid.arrange(geom1, geom2, geom3, geom4, geom5, geom7, geom8, geom9, nrow=3, ncol=3)

Checking the influence of the chosen variables over the overall score, we can see that the relationships are not linear. For some, such as internet_usage and ef_legal, the higher the variable, the more positive its influence on the overall score. For the others, the relationships are more complex: for example, unemployment.rate mostly increases the overall score between 0 and 10%, and pf_identity slowly reduces the overall score before it climbs back up.
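Curvature like this can be accommodated without leaving the linear-model framework, for instance with a quadratic term; a sketch assuming `data_question1`:

```r
# Linear vs quadratic fit for unemployment rate.
fit_lin  <- lm(overallscore ~ unemployment.rate, data = data_question1)
fit_quad <- lm(overallscore ~ poly(unemployment.rate, 2),
               data = data_question1)

# A significant F-test and lower AIC for the quadratic model would
# support the curvature visible in the scatterplots.
anova(fit_lin, fit_quad)
AIC(fit_lin, fit_quad)
```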

In conclusion, after reviewing which variables correlate with each other, handling the multicollinearity problems, regressing the overall SDG score and finally examining the influence of these explanatory variables across their ranges, we find that most of the variables retained in our model significantly influence (positively or negatively) the overall SDG score. As the goals are mostly correlated with each other, we can presume that taking the overall score as our dependent variable leads to the same conclusion. Nevertheless, we still need to go deeper and check the influence of the individual goal scores on one another.

5 Conclusion

5.1 Take home message

  • R.I.P Shiny

5.2 Limitations

5.3 Future work?